LS-DYNA

Introduction

LS-DYNA is available on all Compute Canada systems. It is used for many applications to solve problems in multi-physics, solid mechanics, heat transfer and fluid dynamics, either as separate phenomena or as coupled physics such as thermal stress or fluid-structure interaction. LSTC was recently purchased by ANSYS, so the software may eventually be provided as part of the ansys module. For now, we recommend using the ls-dyna module as documented on this page.

Licensing

Compute Canada is a hosting provider for LS-DYNA. This means that we have LS-DYNA software installed on our clusters, but we do not provide a generic license accessible to everyone. However, many institutions, faculties, and departments already have licenses that can be used on our clusters. Researchers can also purchase a dedicated license directly from the company for use on Compute Canada systems, hosted on the SHARCNET license server.

Once a license is set up, a few technical steps remain. The license server on your end must be reachable by our compute nodes, which will require our technical team to get in touch with the people managing your license software. In some cases, this has already been done. You should then be able to load an ls-dyna Compute Canada module, and it should find your license automatically. For assistance, please contact our Technical support.
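As a quick sanity check, if the nc (netcat) utility is available on a login node, you can probe the license port directly; the hostname and port below are placeholders only, so substitute the values for your own license server:

nc -zv license.example.com 31010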

Configuring your license file

Our module for LS-DYNA is designed to look for license information in a few places. One of those places is your home folder. If you have your own license server, you can write the information to access it in the following format:

File : ls-dyna.lic

#LICENSE_TYPE: network
#LICENSE_SERVER:<port>@<server>

and put this file in the folder $HOME/.licenses/ on each cluster where you plan to submit ls-dyna jobs. Note that firewall changes will need to be made on both our side and your side. To arrange this, send an email containing the service port and IP address of your floating license server to Technical support. To check that your new license file is working, run the following commands:

module load ls-dyna
ls-dyna_s

The output should contain defined (non-empty) values for Licensed to: and Issued by:. When done, press ^c to exit.
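For example, a minimal sketch of creating the license file from the command line on a cluster (the port and server name below are placeholders; use the values for your own license server):

mkdir -p $HOME/.licenses
cat > $HOME/.licenses/ls-dyna.lic << 'EOF'
#LICENSE_TYPE: network
#LICENSE_SERVER:31010@license.example.com
EOF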

Cluster Batch Job Submission

LS-DYNA provides software for running jobs on a single node using SMP (Shared Memory Parallel) binaries and on multiple nodes using MPP (Message Passing Parallel) binaries. Parallelization is achieved with OpenMP and MPI, respectively. Starting with version 12.0, the StdEnv/2020 module must be loaded before the ls-dyna or ls-dyna-mpi module. Sample Slurm scripts are shown below.
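For example, a typical sequence for the newer MPP modules might look like the following (the version numbers are only illustrative; run module spider ls-dyna to see exactly which versions are installed):

module load StdEnv/2020        # required before loading version 12.0 or newer
module load ls-dyna-mpi/12.0   # MPP (multi-node) binaries; load ls-dyna instead for the SMP binaries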

Single Node Slurm Script

Modules for running jobs on a single node can be listed with: module spider ls-dyna. Jobs may be submitted to the queue with: sbatch script-smp.sh. The following is a sample submission script for running LS-DYNA with eight cores on a single cluster compute node.

File : script-smp.sh

#!/bin/bash
#SBATCH --account=def-account  # Specify your account name
#SBATCH --time=0-03:00         # D-HH:MM
#SBATCH --cpus-per-task=8      # Number of cores
#SBATCH --mem=16G              # Memory per node

module load StdEnv/2016.4
module load ls-dyna/11.1

ls-dyna_s ncpu=$SLURM_CPUS_ON_NODE i=airbag.deploy.k
where 
ls-dyna_s = single-precision SMP solver
ls-dyna_d = double-precision SMP solver
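For example, a typical workflow for submitting the script and reviewing the finished job (the job ID printed by sbatch is the number to pass to seff):

sbatch script-smp.sh     # submit the job; Slurm prints the job ID
squeue -u $USER          # monitor your pending and running jobs
seff <jobid>             # after completion, report CPU and memory efficiency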

Multiple Node Slurm Script

Modules for running jobs on multiple nodes can be listed with: module spider ls-dyna. Jobs may be submitted to the queue with: sbatch script-mpp.sh. The following is a sample submission script for running LS-DYNA with eight cores across multiple cluster compute nodes, where the number of nodes used is determined by the scheduler:

File : script-mpp.sh

#!/bin/bash
#SBATCH --account=def-account  # Specify your account name
#SBATCH --time=0-03:00         # D-HH:MM
#SBATCH --ntasks=8             # Number of cores
#SBATCH --mem-per-cpu=2G       # Memory per core

module load StdEnv/2020
module load ls-dyna-mpi/12.0

srun ls-dyna_d i=airbag.deploy.k
where 
ls-dyna_s = single-precision MPP solver
ls-dyna_d = double-precision MPP solver

Depending on the complexity of the simulation, LS-DYNA may not be able to use a large number of cores efficiently. Therefore, before running a simulation to completion, test its scaling properties by gradually increasing the number of cores to find the largest value that still gives a worthwhile speedup. To determine the Job Wall-clock time, CPU Efficiency and Memory Efficiency of a successfully completed job, use the seff jobnumber command.
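As a rough sketch of such a scaling test, note that options passed to sbatch on the command line override the matching #SBATCH lines in the script, so the same script can be resubmitted with different core counts:

for n in 2 4 8 16; do
  sbatch --ntasks=$n --job-name=scale-$n script-mpp.sh
done
# once the jobs complete, compare their wall-clock times and efficiencies with seff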

Remote visualization

LSTC provides LS-PrePost for pre- and post-processing of LS-DYNA models. This program is made available by a separate module. It does not require a license and can be used on any cluster node or on the Graham VDI nodes by following these steps:

Cluster Nodes

Connect to a compute or login node with TigerVNC
module load ls-prepost
lsprepost

VDI Nodes

Connect to gra-vdi with TigerVNC
module load CcEnv StdEnv
module load ls-prepost
lsprepost