Ansys Electronic Desktop jobs may be submitted to a cluster queue with the <code>sbatch script-name.sh</code> command using either of the following single node scripts.  As of January 2023 the scripts had only been tested on Graham, and therefore may be updated in the future as required to support other clusters.  Before using them, specify the simulation time, memory and number of cores, and replace YOUR_AEDT_FILE with your input file name.  A full listing of command line options can be obtained by starting Ansys EDT in [[ANSYS#Graphical_use|graphical mode]] with the command <code>ansysedt -help</code> or <code>ansysedt -Batchoptionhelp</code>; each opens a scrollable graphical popup.
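
As a minimal sketch, assuming the script below has been saved as <code>script-name.sh</code> and edited as described above, the job would be submitted and monitored as follows:

 sbatch script-name.sh    # submit the job script to the cluster queue
 squeue -u $USER          # check the status of your queued jobs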


<!--T:1094-->
<tabs>
<tab name="Single node (command line)">
#!/bin/bash

<!--T:1095-->
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --nodes=1              # Request one node (Do Not Change)

<!--T:1096-->
module load StdEnv/2020
module load ansysedt/2021R2

<!--T:1097-->
# Uncomment next line to run a test example:
#cp -f $EBROOTANSYSEDT/AnsysEM21.2/Linux64/Examples/HFSS/Antennas/TransientGeoRadar.aedt .
</tabs>

== Ansys ROCKY == <!--T:110-->

<!--T:1101-->
Besides running simulations in GUI mode (as discussed in the Graphical usage section below), [https://www.ansys.com/products/fluids/ansys-rocky Ansys Rocky] can also run simulations in non-GUI (or headless) mode.  Both modes support running Rocky with CPUs only or with CPUs and [https://www.ansys.com/blog/mastering-multi-gpu-ansys-rocky-software-enhancing-its-performance GPUs].  Two sample Slurm scripts are provided in the section below; each would be submitted to the Graham queue with the <code>sbatch</code> command as usual.  At the time of this writing neither script has been tested, so extensive customization will likely be required.  It is important to note that these scripts are only usable on Graham, since the rocky module that they both load is at present installed (locally) only on Graham.
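
As a minimal sketch, assuming one of the scripts below has been saved as <code>rocky-job.sh</code> (a hypothetical file name), it would be submitted and monitored with:

 sbatch rocky-job.sh    # submit the Rocky simulation to the queue
 squeue -u $USER        # check the status of the job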


=== Slurm scripts === <!--T:1102-->

<!--T:1103-->
To get a full listing of command line options, run <code>Rocky -h</code> on the command line after loading any rocky module (currently only rocky/2023R2 is available on Graham, with 2024R1 and 2024R2 to be added as soon as possible).  Regarding the use of Rocky with GPUs to solve coupled problems, the number of CPUs requested from Slurm (on the same node) should be increased until the scalability limit of the coupled application is reached.  On the other hand, if Rocky is run with GPUs to solve standalone uncoupled problems, only the minimal number of CPUs sufficient for Rocky to still run optimally should be requested; for instance, only 2 or possibly 3 CPUs may be required.  Finally, when Rocky is run with more than 4 CPUs, <I>rocky_hpc</I> licenses are required; these are provided by the SHARCNET license.
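
For example, the full options listing can be displayed on Graham as follows:

 module load StdEnv/2023
 module load rocky/2023R2
 Rocky -h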


<!--T:1104-->
<tabs>
<tab name="CPU only">
#!/bin/bash

<!--T:1105-->
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-02:00        # Specify time (DD-HH:MM)
#SBATCH --nodes=1              # Request one node (do not change)

<!--T:1106-->
module load StdEnv/2023
module load rocky/2023R2 ansys/2023R2  # only available on graham (do not change)

Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=0
}}
#!/bin/bash

<!--T:1107-->
#SBATCH --account=account      # Specify your account (def or rrg)
#SBATCH --time=00-01:00        # Specify time (DD-HH:MM)
#SBATCH --nodes=1              # Request one node (do not change)

<!--T:1108-->
module load StdEnv/2023
module load rocky/2023R2 ansys/2023R2  # only available on graham (do not change)

Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=1 --gpu-num=$SLURM_GPUS_ON_NODE
}}
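
Both scripts pass <code>$SLURM_CPUS_PER_TASK</code> to <code>--ncpus</code>, so the CPU guidance above is applied through the <code>--cpus-per-task</code> request in the script header.  In the GPU script, <code>--gpu-num</code> is likewise set from <code>$SLURM_GPUS_ON_NODE</code>, the number of GPUs that Slurm allocated on the node; this assumes the header lines not shown above request at least one GPU, for example with <code>#SBATCH --gpus-per-node=1</code>.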