LS-DYNA
Introduction
LS-DYNA is available on all our systems. It is used for many applications to solve problems in multiphysics, solid mechanics, heat transfer and fluid dynamics. Analyses are performed as separate phenomena or coupled physics simulations such as thermal stress or fluid structure interaction. LSTC was recently purchased by ANSYS so the LS-DYNA software may eventually be exclusively provided as part of the ANSYS module. For now we recommend using the LS-DYNA software traditionally provided by LSTC as documented in this wiki page.
Licensing
The Alliance is a hosting provider for LS-DYNA. This means that we have LS-DYNA software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have local licenses that can be used on our clusters. Researchers can also purchase a license directly from the company to run on a SHARCNET license server for dedicated use on any Alliance system.
Once a license is set up, a few technical steps remain. The license server on your end must be reachable from our compute nodes, which requires our technical team to coordinate with the people who manage your license server; in some cases this has already been done. You should then be able to load an ls-dyna module, and it should find your license automatically. For assistance please contact our Technical support.
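For example, the available versions can be listed and one loaded as follows (the version shown is only illustrative; use one reported by module spider on your cluster):
module spider ls-dyna        # list available LS-DYNA module versions
module load ls-dyna/13.1.1   # load a version; your license settings are then picked up automatically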
Configuring your license file
In 2019 ANSYS purchased Livermore Software Technology Corporation (LSTC), the developer of LS-DYNA. As a result, LS-DYNA licenses are increasingly being issued by ANSYS to run on existing ANSYS License servers instead of dedicated LSTC License servers configured with the LSTC License manager software. This section explains how to configure your account for both types.
LSTC License Servers
If you have a license issued to run on an LSTC License Server, there are two options to specify it:
Option 1) Specify your license server by creating a small file named ls-dyna.lic with the following contents:
#LICENSE_TYPE: network
#LICENSE_SERVER:<port>@<server>
where <port> is an integer number and <server> is the hostname of your LSTC License server. Put this file in the directory $HOME/.licenses/ on each cluster where you plan to submit jobs. The values in the file are picked up by LS-DYNA at runtime because the Alliance module system sets LSTC_FILE=/home/$USER/.licenses/ls-dyna.lic whenever you load an ls-dyna or ls-dyna-mpi module. This approach is recommended over Option 2 when specifying an LSTC License Server, since the same settings will automatically be used by all jobs submitted from your account, without the need to specify them in each individual slurm script or set them in your environment.
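For example, a completed $HOME/.licenses/ls-dyna.lic file might look like the following, where the port number and hostname are hypothetical placeholders for the values supplied with your license:
#LICENSE_TYPE: network
#LICENSE_SERVER:31010@license.example.ca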
Option 2) Specify your license server by setting the following two environment variables in your slurm scripts:
export LSTC_LICENSE=network
export LSTC_LICENSE_SERVER=<port>@<server>
where <port> is an integer number and <server> is the hostname or IP address of your LSTC License server. These variables take priority over any values specified in your ~/.licenses/ls-dyna.lic file, which must nevertheless exist (even if it is empty) for any ls-dyna or ls-dyna-mpi module to load successfully. To ensure it exists, run touch ~/.licenses/ls-dyna.lic once on the command line on each cluster where you will submit jobs. For further details see the official documentation.
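As a sketch, the corresponding lines of a slurm script using Option 2 could look like the following, where the port number and hostname are hypothetical placeholders for your own LSTC License server:
touch ~/.licenses/ls-dyna.lic                         # empty file required for the module to load
export LSTC_LICENSE=network                           # license type
export LSTC_LICENSE_SERVER=31010@license.example.ca   # hypothetical <port>@<server>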
ANSYS License Servers
If your LS-DYNA license is hosted on an ANSYS License Server then set the following two environment variables in your slurm scripts:
export LSTC_LICENSE=ansys
export ANSYSLMD_LICENSE_FILE=<port>@<server>
where <port> is an integer number and <server> is the hostname or IP address of your ANSYS License server. These variables cannot be defined in your ~/.licenses/ls-dyna.lic file; however, the file must exist (even if it is empty) for any ls-dyna module to load. To ensure this is the case, run touch ~/.licenses/ls-dyna.lic once from the command line (or each time in your slurm scripts). Note that only module versions >= 13.1.1 will work with ANSYS License Servers.
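For illustration, the corresponding slurm script lines might look like the following, where the port number and hostname are hypothetical placeholders for your ANSYS License server:
touch ~/.licenses/ls-dyna.lic                           # empty file required for the module to load
export LSTC_LICENSE=ansys                               # license type
export ANSYSLMD_LICENSE_FILE=1055@ansyslic.example.ca   # hypothetical <port>@<server>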
CMC License Holders
If your LS-DYNA license was purchased from CMC, then set ANSYSLMD_LICENSE_FILE for their ANSYS License Server according to the cluster you are using:
Beluga: export ANSYSLMD_LICENSE_FILE=6624@10.20.73.21
Cedar: export ANSYSLMD_LICENSE_FILE=6624@172.16.121.25
Graham: export ANSYSLMD_LICENSE_FILE=6624@199.241.167.222
Narval: export ANSYSLMD_LICENSE_FILE=6624@10.100.64.10
Niagara: export ANSYSLMD_LICENSE_FILE=6624@172.16.205.199
where the various IP addresses correspond to the respective CADpass servers.
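For example, a CMC license holder submitting jobs on Graham would add the following two lines (values taken from the list above) to their slurm script:
export LSTC_LICENSE=ansys                             # CMC licenses are hosted on an ANSYS License Server
export ANSYSLMD_LICENSE_FILE=6624@199.241.167.222     # CMC CADpass server for Graham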
Initial Setup and Testing
If your license server has never been used on the cluster where you plan to run jobs, firewall changes will first need to be made on both the cluster side and the server side (unless you are using a CMC license server). To arrange this, send an email containing the service port and IP address of your floating license server to Technical support. To check whether your license file is working, run the following commands:
module load ls-dyna
ls-dyna_s or ls-dyna_d
where it is not necessary to specify any input file or arguments to run this test. The output header should contain a (non-empty) value for "Licensed to:" (with the exception of CMC license servers). Press ^C to quit the program and return to the command line.
Cluster batch job submission
LS-DYNA provides binaries for running jobs on a single compute node (SMP - Shared Memory Parallel, using OpenMP) or across multiple compute nodes (MPP - Message Passing Parallel, using MPI). This section provides slurm scripts for each job type.
Single node jobs
Modules for running jobs on a single compute node can be listed with module spider ls-dyna. Jobs may be submitted to the queue with sbatch script-smp.sh. The following sample slurm script shows how to run LS-DYNA with 8 cores on a single cluster compute node. The AUTO setting allows explicit simulations to allocate more memory than the default 100M word size at runtime:
#!/bin/bash
#SBATCH --account=def-account # Specify
#SBATCH --time=0-03:00 # D-HH:MM
#SBATCH --cpus-per-task=8 # Specify number of cores
#SBATCH --mem=16G # Specify total memory
#SBATCH --nodes=1 # Do not change
#export RSNT_ARCH=avx2 # Uncomment on beluga for versions < 14.1.0
module load StdEnv/2020 # Versions < 14.1.0
module load ls-dyna/13.1.1
#module load StdEnv/2023 # Versions > 13.1.1 (coming soon)
#module load ls-dyna/14.1.0
#export LSTC_LICENSE=ansys # Specify an ANSYS License Server
#export ANSYSLMD_LICENSE_FILE=<port>@<server>
export LSTC_MEMORY=AUTO
ls-dyna_s ncpu=$SLURM_CPUS_ON_NODE i=airbag.deploy.k memory=100M
where
ls-dyna_s = single precision smp solver
ls-dyna_d = double precision smp solver
Multiple node jobs
The module versions available for running jobs on multiple nodes can be listed with module spider ls-dyna-mpi. To submit jobs to the queue use sbatch script-mpp.sh. Sample scripts for submitting jobs to a specified total number of whole nodes *OR* a specified total number of cores follow:
Specify node count
Jobs can be submitted to a specified number of whole compute nodes with the following script:
#!/bin/bash
#SBATCH --account=def-account # Specify
#SBATCH --time=0-03:00 # D-HH:MM
#SBATCH --ntasks-per-node=32 # Specify number of cores per node (graham 32 or 44, cedar 48, beluga 40)
#SBATCH --nodes=2 # Specify number of compute nodes (1 or more)
#SBATCH --mem=0 # Use all memory per compute node (do not change)
#export RSNT_ARCH=avx2 # Uncomment on beluga for versions < 14.1.0
module load StdEnv/2020 # Versions < 14.1.0
module load ls-dyna-mpi/13.1.1
#module load StdEnv/2023 # Versions > 13.1.1 (coming soon)
#module load ls-dyna-mpi/14.1.0
#export LSTC_LICENSE=ansys # Specify an ANSYS License Server
#export ANSYSLMD_LICENSE_FILE=<port>@<server>
export LSTC_MEMORY=AUTO
srun ls-dyna_d i=airbag.deploy.k memory=8G memory2=200M
where
ls-dyna_s = single precision mpp solver
ls-dyna_d = double precision mpp solver
Specify core count
Jobs can be submitted to an arbitrary number of compute nodes by specifying the number of cores. This approach allows the slurm scheduler to determine the optimal number of compute nodes, which helps minimize job wait time in the queue. Memory limits are applied per core; therefore, a sufficiently large value of mem-per-cpu must be specified so the master process can successfully distribute and manage the computations. Note that requesting a total amount of memory instead of using the mem-per-cpu option may not be as efficient as the other job submission methods described so far.
#!/bin/bash
#SBATCH --account=def-account # Specify
#SBATCH --time=0-03:00 # D-HH:MM
#SBATCH --ntasks=64 # Specify total number of cores
#SBATCH --mem-per-cpu=2G # Specify memory per core
#export RSNT_ARCH=avx2 # Uncomment on beluga for versions < 14.1.0
module load StdEnv/2020 # Versions < 14.1.0
module load ls-dyna-mpi/13.1.1
#module load StdEnv/2023 # Versions > 13.1.1 (coming soon)
#module load ls-dyna-mpi/14.1.0
#export LSTC_LICENSE=ansys # Specify an ANSYS License Server
#export ANSYSLMD_LICENSE_FILE=<port>@<server>
export LSTC_MEMORY=AUTO
srun ls-dyna_d i=airbag.deploy.k
where
ls-dyna_s = single precision mpp solver
ls-dyna_d = double precision mpp solver
Depending on the simulation, LS-DYNA may be unable to use many cores efficiently. Therefore, before running a full simulation, test its scaling properties by gradually increasing the number of cores to determine the optimal value before slowdown occurs. To determine the Job Wall-clock time, CPU Efficiency and Memory Efficiency of a successfully completed job, use the seff jobnumber command.
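For example, assuming a hypothetical job number:
seff 1234567   # reports Job Wall-clock time, CPU Efficiency and Memory Efficiency for the completed job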
Visualization with LS-PrePost
LSTC provides LS-PrePost for pre- and post-processing of LS-DYNA models. This program is made available by a separate module. It does not require a license and can be used on any cluster node or the Graham VDI nodes:
Cluster nodes
Connect to a compute node or to a login node with TigerVNC and open a terminal:
module load StdEnv/2020
module load ls-prepost/4.8
lsprepost

or, for the newer version:

module load ls-prepost/4.9
lsprepost OR lspp49
VDI nodes
Connect to gra-vdi with TigerVNC and open a new terminal:
module load CcEnv StdEnv/2020
module load ls-prepost/4.8
lsprepost

or, for the newer version:

module load ls-prepost/4.9
lsprepost OR lspp49