LS-DYNA

<translate>
= Introduction = <!--T:1-->
[http://www.lstc.com LS-DYNA] is available on all our systems.  It is used for many [http://www.lstc.com/applications applications] to solve problems in multiphysics, solid mechanics, heat transfer and fluid dynamics.  Analyses can be performed as separate phenomena or as coupled physics simulations such as thermal stress or fluid-structure interaction.  LSTC was recently purchased by ANSYS, so LS-DYNA may eventually be provided exclusively as part of the ANSYS module.  For now, we recommend using the LS-DYNA software traditionally provided by LSTC, as documented on this wiki page.


= Licensing = <!--T:2-->
The Alliance is a hosting provider for LS-DYNA. This means that we have LS-DYNA software installed on our clusters.  However, the Alliance does NOT provide a generic license accessible to everyone, nor does it provide license hosting services.  Instead, many institutions, faculties, and departments already have licenses that can be used on our clusters.  Before such a license can be used, some cluster-specific network changes may need to be made to ensure the license server is reachable from the compute nodes; if the license has already been used on a particular cluster, these changes may already be in place.  Users unable to locate or arrange for a license on campus may contact [https://www.cmc.ca/support/ CMC Microsystems].  Licenses purchased from CMC do not require the overhead of hosting a local license server, since they are hosted on a remote server system that CMC manages, with the added benefit of being usable anywhere.  If you have your own server and need a quote for a locally managed license, consider contacting [https://simutechgroup.com Simutech] or ANSYS directly.  SHARCNET does not provide any free LS-DYNA licenses or license hosting services at this time.


=== Initial Setup and Testing === <!--T:102-->

<!--T:140-->
If your (existing or new) license server has never been used on the cluster where you plan to run jobs, firewall changes will first need to be made on both the cluster side and the license server side.  This typically requires involvement from both our technical team and the technical people managing your license software.  To arrange this, send an email containing the service port and IP address of your floating license server to [[Technical support]].  To check whether your license is working, run the following commands:

<!--T:191-->
<code>module load ls-dyna
ls-dyna_s or ls-dyna_d</code>

<!--T:160-->
It is not necessary to specify any input file or arguments for this test.  The output header should contain a (non-empty) value for <code>Licensed to:</code>, with the exception of CMC license servers.  Press ^C to quit the program and return to the command line.
 
== Configuring your license == <!--T:4-->
 
<!--T:192-->
In 2019, ANSYS purchased the Livermore Software Technology Corporation (LSTC), the developer of LS-DYNA.  LS-DYNA licenses issued by ANSYS since that time use <b>ANSYS license servers</b>, while licenses issued by LSTC may still use an <b>LSTC license server</b>.  Some of our users also obtain an LS-DYNA license through [https://www.cmc.ca/ CMC Microsystems].  This section explains how to configure your account or job script for each of these cases.
 
=== LSTC License === <!--T:181-->
 
<!--T:182-->
If you have a license issued to run on an LSTC License Server, there are two options to specify it:
 
<!--T:183-->
Option 1) Specify your license server by creating a small file named <tt>ls-dyna.lic</tt> with the following contents:
{{File
|name=ls-dyna.lic
|contents=
#LICENSE_TYPE: network
#LICENSE_SERVER:<port>@<server>
}}
where <port> is an integer and <server> is the hostname of your LSTC License Server.  Put this file in the directory <tt>$HOME/.licenses/</tt> on each cluster where you plan to submit jobs.  The values in the file are picked up automatically when LS-DYNA runs, because the Alliance module system sets <code>LSTC_FILE=/home/$USER/.licenses/ls-dyna.lic</code> whenever you load an <code>ls-dyna</code> or <code>ls-dyna-mpi</code> module.  This approach is recommended for users with a license hosted on an LSTC License Server since, compared to the next option, the identical settings are automatically used by all jobs you submit on the cluster, without the need to specify them in each individual Slurm script or to set them in your environment.
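Once the file is in place, you can quickly confirm that the module is pointing at it.  This is only a quick check, and it assumes the module sets <code>LSTC_FILE</code> as described above:
 module load ls-dyna
 echo $LSTC_FILE    # should print the path to your ~/.licenses/ls-dyna.lic file
 cat $LSTC_FILE     # should show your LICENSE_TYPE and LICENSE_SERVER lines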
<!--T:184-->
Option 2) Specify your license server by setting the following two environment variables in your Slurm scripts:
 export LSTC_LICENSE=network
 export LSTC_LICENSE_SERVER=<port>@<server>
where <port> is an integer and <server> is the hostname or IP address of your LSTC License Server.  These variables take priority over any values specified in your <code>~/.licenses/ls-dyna.lic</code> file, which must exist (even if it is empty) for any <code>ls-dyna</code> or <code>ls-dyna-mpi</code> module to load successfully.  To ensure it exists, run <code>touch ~/.licenses/ls-dyna.lic</code> once on the command line on each cluster where you will submit jobs.  For further details see the official [https://lsdyna.ansys.com/download-install-overview/ documentation].
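A minimal sketch of where these two lines fit in a job script follows; the resource values, module versions and input file are placeholders borrowed from the sample scripts further down this page:
 #!/bin/bash
 #SBATCH --account=def-account
 #SBATCH --time=0-03:00
 #SBATCH --cpus-per-task=4
 #SBATCH --mem=8G
 export LSTC_LICENSE=network
 export LSTC_LICENSE_SERVER=<port>@<server>
 module load StdEnv/2020 ls-dyna/13.1.1
 ls-dyna_s ncpu=$SLURM_CPUS_ON_NODE i=airbag.deploy.k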
 
=== ANSYS License === <!--T:185-->
 
<!--T:186-->
If your LS-DYNA license is hosted on an ANSYS License Server, set the following two environment variables in your Slurm scripts:
 export LSTC_LICENSE=ansys
 export ANSYSLMD_LICENSE_FILE=<port>@<server>
where <port> is an integer and <server> is the hostname or IP address of your ANSYS License Server.  These variables cannot be defined in your <code>~/.licenses/ls-dyna.lic</code> file; the file, however, must exist (even if it is empty) for any <code>ls-dyna</code> module to load.  To ensure this, run <code>touch ~/.licenses/ls-dyna.lic</code> once from the command line (or each time in your Slurm scripts).  Note that only module versions >= 12.2.1 will work with ANSYS License Servers.
 
==== SHARCNET ==== <!--T:285-->
 
<!--T:286-->
The SHARCNET ANSYS license supports running SMP and MPP LS-DYNA jobs.  It can be used freely by anyone (on a core- and job-limited basis, to be announced) on the Graham, Narval or Cedar clusters by adding the following lines to your Slurm script:
 export LSTC_LICENSE=ansys
 export ANSYSLMD_LICENSE_FILE=1055@license3.sharcnet.ca
 
=== CMC License === <!--T:187-->
 
<!--T:188-->
If your LS-DYNA license was purchased from CMC, set the following two environment variables, choosing the second according to the cluster you are using:
 export LSTC_LICENSE=ansys
 Beluga:  export ANSYSLMD_LICENSE_FILE=6624@10.20.73.21
 Cedar:   export ANSYSLMD_LICENSE_FILE=6624@172.16.121.25
 Graham:  export ANSYSLMD_LICENSE_FILE=6624@199.241.167.222
 Narval:  export ANSYSLMD_LICENSE_FILE=6624@10.100.64.10
 Niagara: export ANSYSLMD_LICENSE_FILE=6624@172.16.205.199

<!--T:190-->
where the IP addresses correspond to the respective CADpass servers.  No firewall changes are required to use a CMC license on any cluster since these have already been made.  Since the remote CMC server that hosts LS-DYNA licenses is ANSYS based, these variables cannot be defined in your <code>~/.licenses/ls-dyna.lic</code> file; the file, however, must exist (even if it is empty) for any <code>ls-dyna</code> module to load.  To ensure this is the case, run <code>touch ~/.licenses/ls-dyna.lic</code> once from the command line (or each time in your Slurm scripts).  Note that only module versions >= 13.1.1 will work with ANSYS License Servers.
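If you submit jobs on more than one cluster, a small case statement in your Slurm script can pick the matching CADpass server automatically.  This is only a sketch; it assumes the <code>$CC_CLUSTER</code> environment variable (which holds the cluster name on our systems) is available in the job environment:
 export LSTC_LICENSE=ansys
 case "$CC_CLUSTER" in
   beluga)  export ANSYSLMD_LICENSE_FILE=6624@10.20.73.21 ;;
   cedar)   export ANSYSLMD_LICENSE_FILE=6624@172.16.121.25 ;;
   graham)  export ANSYSLMD_LICENSE_FILE=6624@199.241.167.222 ;;
   narval)  export ANSYSLMD_LICENSE_FILE=6624@10.100.64.10 ;;
   niagara) export ANSYSLMD_LICENSE_FILE=6624@172.16.205.199 ;;
 esac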


= Cluster batch job submission = <!--T:20-->

<!--T:21-->
LS-DYNA provides binaries for running jobs on a single compute node (SMP - Shared Memory Parallel, using OpenMP) or across multiple compute nodes (MPP - Message Passing Parallel, using MPI).  This section provides Slurm scripts for each job type.


== Single node jobs == <!--T:22-->

<!--T:23-->
Modules for running jobs on a single compute node can be listed with <code>module spider ls-dyna</code>.  Jobs may be submitted to the queue with <code>sbatch script-smp.sh</code>.  The following sample Slurm script shows how to run LS-DYNA with 8 cores on a single cluster compute node.  The AUTO setting allows explicit simulations to allocate more memory than the default 100M word size at runtime:
{{File
|name=script-smp.sh
|contents=
#!/bin/bash
#SBATCH --account=def-account   # Specify
#SBATCH --time=0-03:00          # D-HH:MM
#SBATCH --cpus-per-task=8       # Specify number of cores
#SBATCH --mem=16G               # Specify total memory
#SBATCH --nodes=1               # Do not change

<!--T:51-->
#export RSNT_ARCH=avx2          # Uncomment on beluga for versions < 14.1.0

<!--T:39-->
module load StdEnv/2020         # Versions < 14.1.0
module load ls-dyna/13.1.1

<!--T:38-->
#module load StdEnv/2023        # Versions > 13.1.1 (coming soon)
#module load intel/2023.2.1
#module load ls-dyna/12.2.1

<!--T:170-->
#export LSTC_LICENSE=ansys      # Specify an ANSYS License Server
#export ANSYSLMD_LICENSE_FILE=<port>@<server>

<!--T:40-->
export LSTC_MEMORY=AUTO

<!--T:41-->
ls-dyna_s ncpu=$SLURM_CPUS_ON_NODE i=airbag.deploy.k memory=100M
}} where
 ls-dyna_s = single precision smp solver
 ls-dyna_d = double precision smp solver
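As a rough guide when choosing the <code>memory=</code> value (assuming the usual LS-DYNA convention of 4 bytes per word for the single precision solver and 8 bytes per word for the double precision solver), the 100M word default corresponds to roughly:
 100M words x 4 bytes/word ≈ 400 MB   (ls-dyna_s)
 100M words x 8 bytes/word ≈ 800 MB   (ls-dyna_d)
either of which fits comfortably within the 16G requested with <code>--mem</code> in the sample script above.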


== Multiple node jobs == <!--T:24-->

<!--T:42-->
The module versions available for running jobs on multiple nodes (using the prebuilt MPP binaries provided by LS-DYNA) can be listed with <code>module spider ls-dyna-mpi</code>.  To submit jobs to the queue use <code>sbatch script-mpp.sh</code>.  Sample scripts for submitting jobs to a specified total number of whole nodes *OR* a specified total number of cores follow:


=== Specify node count === <!--T:25-->

<!--T:26-->
Jobs can be submitted to a specified number of <b>whole</b> compute nodes with the following script:
{{File
|name=script-mpp-bynode.sh
|contents=
#!/bin/bash
#SBATCH --account=def-account    # Specify
#SBATCH --time=0-03:00           # D-HH:MM
#SBATCH --ntasks-per-node=32     # Specify number of cores per node (graham 32 or 44, cedar 48, beluga 40)
#SBATCH --nodes=2                # Specify number of compute nodes (1 or more)
#SBATCH --mem=0                  # Use all memory per compute node (do not change)

<!--T:52-->
#export RSNT_ARCH=avx2           # Uncomment on beluga for versions < 14.1.0

<!--T:44-->
module load StdEnv/2020          # Versions < 14.1.0
module load ls-dyna-mpi/13.1.1

<!--T:43-->
#module load StdEnv/2023         # Versions > 13.1.1 (coming soon)
#module load intel/2023.2.1
#module load ls-dyna-mpi/12.2.1

<!--T:175-->
#export LSTC_LICENSE=ansys       # Specify an ANSYS License Server
#export ANSYSLMD_LICENSE_FILE=<port>@<server>

<!--T:45-->
export LSTC_MEMORY=AUTO

<!--T:46-->
srun ls-dyna_d i=airbag.deploy.k memory=8G memory2=200M
}} where
 ls-dyna_s = single precision mpp solver
 ls-dyna_d = double precision mpp solver


=== Specify core count === <!--T:27-->

<!--T:28-->
Jobs can be submitted to an arbitrary number of compute nodes by specifying the total number of cores.  This approach allows the Slurm scheduler to determine the optimal number of compute nodes to help minimize the job wait time in the queue.  Memory limits are applied per core, therefore a sufficiently large value of <tt>mem-per-cpu</tt> must be specified so that the master process can successfully distribute and manage the computations.  Note that requesting a total amount of memory instead of using the <tt>mem-per-cpu</tt> option may not be as efficient as the other job submission methods described so far.
{{File
|name=script-mpp-bycore.sh
|contents=
#!/bin/bash
#SBATCH --account=def-account     # Specify
#SBATCH --time=0-03:00            # D-HH:MM
#SBATCH --ntasks=64               # Specify total number of cores
#SBATCH --mem-per-cpu=2G          # Specify memory per core

<!--T:53-->
#export RSNT_ARCH=avx2            # Uncomment on beluga for versions < 14.1.0

<!--T:48-->
module load StdEnv/2020           # Versions < 14.1.0
module load ls-dyna-mpi/13.1.1

<!--T:47-->
#module load StdEnv/2023          # Versions > 13.1.1 (coming soon)
#module load intel/2023.2.1
#module load ls-dyna-mpi/12.2.1

<!--T:180-->
#export LSTC_LICENSE=ansys        # Specify an ANSYS License Server
#export ANSYSLMD_LICENSE_FILE=<port>@<server>

<!--T:49-->
export LSTC_MEMORY=AUTO

<!--T:50-->
srun ls-dyna_d i=airbag.deploy.k
}} where
 ls-dyna_s = single precision mpp solver
 ls-dyna_d = double precision mpp solver
== Performance Testing == <!--T:30-->


<!--T:29-->
Depending on the simulation, LS-DYNA may or may not be able to use a large number of cores efficiently in parallel.  Therefore, before running a full simulation, run standard scaling tests to determine the optimal number of cores before simulation slowdown occurs; the <tt>seff jobnumber</tt> command can be used to determine the Job Wall-clock time, CPU Efficiency and Memory Efficiency of successfully completed test jobs.  In addition, recent testing with airbag jobs submitted to the queue on different clusters found significantly better performance on Cedar and Narval than on Graham.  The testing was done with 6 cores on a single node using the ls-dyna/12.2.1 module and with 6 cores evenly distributed across two nodes using the ls-dyna-mpi/12.2.1 module.  Although limited, these results show that significant performance variation can occur between systems for a given simulation setup.  Therefore, before running full LS-DYNA simulations, it is recommended to both A) conduct standard scaling tests on a given cluster and B) run identical test cases on each cluster before settling on an optimal job size, module version and cluster configuration.
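For example, a basic scaling test can be scripted by resubmitting the same input deck with an increasing core count and comparing the <tt>seff</tt> results once the jobs finish.  The loop below is only a sketch; the core counts, script name and input file are placeholders taken from the single node example above:
 for n in 1 2 4 8; do
   sbatch --cpus-per-task=$n --job-name=scaling_$n script-smp.sh
 done
 seff <jobnumber>    # run for each completed job and compare wall-clock time and CPU efficiency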


= Visualization with LS-PrePost = <!--T:31-->

<!--T:32-->
LSTC provides [https://www.lstc.com/products/ls-prepost LS-PrePost] for pre- and post-processing of LS-DYNA [https://www.dynaexamples.com/ models].  This program is made available by a separate module.  It does not require a license and can be used on any cluster node or on the Graham VDI nodes:

== Cluster nodes == <!--T:36-->
Connect to a compute node or to a login node with [[VNC#Connect|TigerVNC]] and open a terminal:
 module load StdEnv/2020
 module load ls-prepost/4.8
 lsprepost
or
 module load ls-prepost/4.9
 lsprepost OR lspp49


== VDI nodes == <!--T:37-->
Connect to gra-vdi with [[VNC#VDI_Nodes|TigerVNC]] and open a new terminal:
 module load CcEnv StdEnv/2020
 module load ls-prepost/4.8
 lsprepost
or
 module load ls-prepost/4.9
 lsprepost OR lspp49
</translate>
