</tab>


<!--T:6769-->
</tabs>


=== UDFs === <!--T:520-->


<!--T:6770-->
The first step is to transfer your User-Defined Function, or UDF (namely the sampleudf.c source file and any additional dependency files), to the cluster.  When uploading from a Windows machine, be sure to use the text mode setting of your transfer client; otherwise Fluent will not be able to read the file properly on the cluster, since the cluster runs Linux.  Place the UDF in the directory where your journal, cas and dat files reside.  Next, add one of the following commands to your journal file before the commands that read in your simulation cas/dat files.  Regardless of whether you use the Interpreted or Compiled UDF approach, before uploading your cas file to the Alliance, check that neither the Interpreted UDFs Dialog Box nor the UDF Library Manager Dialog Box is configured to use any UDF; this ensures that only the journal file commands are in control when jobs are submitted.
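
If you are not sure whether a file was transferred in text mode, its line endings can be checked and, if necessary, converted once it is on the cluster.  A minimal sketch, assuming the source file is named sampleudf.c:

 file sampleudf.c       # reports "with CRLF line terminators" if still in Windows format
 dos2unix sampleudf.c   # convert to Linux (LF) line endings in place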


==== Interpreted ==== <!--T:521-->


<!--T:6771-->
To tell Fluent to interpret your UDF at runtime, add the following command line to your journal file before the cas/dat files are read or initialized.  Replace the filename sampleudf.c with the name of your source file.  The command remains the same regardless of whether the simulation is run in serial or parallel.  To ensure the UDF can be found in the same directory as the journal file, remove any managed UDF definitions from the cas file by opening it in the GUI and resaving it, either before uploading to the Alliance or on a compute node or gra-vdi.  Doing this ensures that only the following command/method is in control when Fluent runs.  To use an interpreted UDF with parallel jobs, it must be parallelized as described in the section below.


 <!--T:6772-->
 define/user-defined/interpreted-functions "sampleudf.c" "cpp" 10000 no
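
For context, here is a minimal journal-file sketch showing where the command fits (the case/data file names and iteration count are hypothetical; lines beginning with a semicolon are journal comments):

 ; interpret the UDF before the case and data files are read
 define/user-defined/interpreted-functions "sampleudf.c" "cpp" 10000 no
 /file/read-case-data sample.cas.h5
 /solve/iterate 100
 /file/write-case-data sample-solved.cas.h5
 exit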


==== Compiled ==== <!--T:522-->


<!--T:6773-->
To use this approach, your UDF must be compiled on an Alliance cluster at least once.  Doing so creates a libudf subdirectory structure containing the required <code>libudf.so</code> shared library.  The libudf directory cannot simply be copied from a remote system (such as your laptop) to the Alliance, since the library dependencies of the shared library will not be satisfied, resulting in Fluent crashing on startup.  That said, once you have compiled your UDF on one Alliance cluster, you can transfer the newly created libudf to any other Alliance cluster, provided your account there loads the same StdEnv environment module version.  Once copied, the UDF can be used by uncommenting the second (load) libudf line below in your journal file when submitting jobs to the cluster.  Do not leave both the compile and the load libudf lines uncommented in your journal file when submitting jobs on the cluster; otherwise your UDF will automatically be (re)compiled for each and every job.  Not only is this highly inefficient, it will also lead to race-condition-like build conflicts if multiple jobs are run from the same directory.  Besides configuring your journal file to build your UDF, the Fluent GUI (run on any cluster compute node or on gra-vdi) may also be used: navigate to the Compiled UDFs Dialog Box, add the UDF source file and click Build.  When using a compiled UDF with parallel jobs, your source file should be parallelized as discussed in the section below.


 <!--T:6774-->
 define/user-defined/compiled-functions compile libudf yes sampleudf.c "" ""


<!--T:6775-->
and/or


 <!--T:6776-->
 define/user-defined/compiled-functions load libudf
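
Once the compile step has completed successfully, a directory layout similar to the following should exist before the load command is used.  This sketch assumes a 3-D double-precision case; the exact subdirectory names vary with solver dimension, precision and architecture:

 libudf
 ├── Makefile
 ├── src
 │   └── sampleudf.c
 └── lnamd64
     ├── 3ddp_host
     │   └── libudf.so
     └── 3ddp_node
         └── libudf.so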


==== Parallel ==== <!--T:523-->


<!--T:6777-->
Before a UDF can be used with a Fluent parallel job (single-node SMP or multi-node MPI), it must be parallelized.  By doing this you control how and which processes (host and/or compute) run specific parts of the UDF code when Fluent runs in parallel on the cluster.  The instrumenting procedure involves adding compiler directives, predicates and reduction macros to your working serial UDF.  Failure to do so will result in Fluent running slowly at best, or crashing immediately at worst.  The end result is a single UDF that runs efficiently when Fluent is used in both serial and parallel mode.  The subject is described in detail under <I>Part I: Chapter 7: Parallel Considerations</I> of the Ansys 2024 <I>Fluent Customization Manual</I>, which can be accessed [https://docs.alliancecan.ca/wiki/Ansys#Online_documentation here].
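
As an illustration only (the function name and reported quantity here are hypothetical; the manual chapter above remains the authoritative reference), here is a sketch of what such instrumentation looks like in practice:

 /* Hypothetical sketch of a parallelized UDF: sums the volume of every
    cell and reports the total from the host process.  The same source
    runs correctly in serial, single-node SMP and multi-node MPI mode. */
 #include "udf.h"
 
 DEFINE_ON_DEMAND(report_total_volume)
 {
     real vol = 0.0;
 
 #if !RP_HOST                     /* serial and compute-node processes only */
     Domain *d = Get_Domain(1);   /* 1 = mixture (top-level) domain */
     Thread *t;
     cell_t c;
 
     thread_loop_c(t, d)          /* loop over all cell threads in the domain */
     {
         /* interior cells only, so cells duplicated on partition
            boundaries are not counted twice */
         begin_c_loop_int(c, t)
         {
             vol += C_VOLUME(c, t);
         }
         end_c_loop_int(c, t)
     }
     vol = PRF_GRSUM1(vol);       /* reduction macro: global sum across all nodes */
 #endif
 
     node_to_host_real_1(vol);    /* node 0 passes the result to the host process */
 
 #if !RP_NODE                     /* serial and host processes only */
     Message("total cell volume = %g\n", vol);
 #endif
 }

Because the Message call is excluded from the compute-node processes, the total is printed exactly once no matter how many parallel processes the job uses.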


module load rocky/2023R2 ansys/2023R2  # only available on graham (do not change)


<!--T:6778-->
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=0
}}
module load rocky/2023R2 ansys/2023R2  # only available on graham (do not change)


<!--T:6779-->
Rocky --simulate "mysim.rocky" --resume=0 --ncpus=$SLURM_CPUS_PER_TASK --use-gpu=1 --gpu-num=$SLURM_GPUS_ON_NODE
}}