Ansys
</tabs>


=== UDFs === <!--T:520-->


The first step is to transfer your user-defined function, or UDF (namely the sampleudf.c source file and any additional files required to build it), to the cluster.  When uploading from a Windows machine, be sure to use your transfer client's text-mode setting; otherwise Fluent will not be able to read the file properly on the cluster, since the cluster runs Linux.  Place the UDF in the directory where your journal, cas and dat files reside.  Next, add one of the following commands to your journal file before the commands that read in your simulation cas/dat files.


==== Interpreted UDF ==== <!--T:521-->


To have Fluent interpret your UDF at runtime, add the following command to your journal file before the cas/dat files are read or initialized, replacing sampleudf.c with the name of your source file.  The command is the same whether the simulation runs in serial or parallel.  To ensure the UDF is found in the same directory as the journal file, remove any managed UDF definitions from the cas file: open it in the GUI (either before uploading to the Alliance, or on a compute node or gra-vdi) and resave it.  This ensures that only the command below controls the UDF when Fluent runs.  To use an interpreted UDF with parallel jobs, it must be parallelized as described in the Parallel Use section below; otherwise it will likely work only when submitted as a serial (1 core) job.


 define/user-defined/interpreted-functions "sampleudf.c" "cpp" 10000 no
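For context, the command's position within a complete journal file might look like the following sketch; the cas/dat file names and iteration count here are hypothetical placeholders, not values from this guide:

 ; hypothetical journal fragment: the UDF must be declared
 ; before the case and data files are read
 define/user-defined/interpreted-functions "sampleudf.c" "cpp" 10000 no
 file/read-case-data sample.cas.h5
 solve/iterate 100
 file/write-case-data sample-out.cas.h5
 exit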
==== Compiled UDF ==== <!--T:522-->


To use this approach, the UDF must be compiled on an Alliance cluster at least once to create the libudf subdirectory.  If you compiled your UDF on a remote system such as your laptop or a lab machine, the corresponding libudf subdirectory cannot simply be copied onto the Alliance.  However, once you have compiled your UDF on one Alliance cluster, it can be transferred to another Alliance cluster without recompiling, provided you load the same StdEnv environment.  It can then be loaded at runtime by including only the second line below in your journal file; this is especially important when submitting parallel jobs.  If instead you leave both of the following lines uncommented in your journal file, the UDF will be (re)compiled and then loaded every time you run a job on the cluster.  This not only adds unnecessary compilation time to every run, but also requires that all the files needed to compile your UDF be present and properly set up on every cluster where you run.  Another way to build the libudf subdirectory structure (containing the <code>libudf.so</code> shared library) on the Alliance is to open your simulation in the GUI on a cluster compute node (or gra-vdi), navigate to the Compiled UDFs dialog box, add your UDF source file and click Build.  Before saving your cas file, however, verify that the UDF is not defined in the Interpreted UDFs dialog box or the UDF Library Manager dialog box, so that only the following journal file commands control it.  To use a compiled UDF with parallel jobs, it must be parallelized as described in the Parallel Use section below; otherwise it will likely work only when submitted as a serial (1 core) job.


 define/user-defined/compiled-functions compile libudf yes sampleudf.c "" ""


 define/user-defined/compiled-functions load libudf
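Putting the two commands together, a typical journal file evolves as sketched below (journal files accept Scheme-style comment lines beginning with a semicolon): on the first run both lines are active, and once libudf has been built successfully the compile line is commented out so that subsequent jobs only load the existing library.

 ; first run: build libudf from sampleudf.c, then load it
 define/user-defined/compiled-functions compile libudf yes sampleudf.c "" ""
 define/user-defined/compiled-functions load libudf

 ; subsequent runs (same StdEnv): load the existing libudf only
 ; define/user-defined/compiled-functions compile libudf yes sampleudf.c "" ""
 define/user-defined/compiled-functions load libudf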
==== Parallel Use ==== <!--T:523-->
Most UDFs are created and tested in serial mode; however, they must be parallelized before submitting single-node Shared Memory or multi-node MPI parallel jobs, otherwise the simulation will at worst crash or at best run very slowly.  The topic is well described in <I>Chapter 7: Parallel Considerations</I> of the official Ansys 2024 Fluent documentation, which can be accessed by following the steps given [https://docs.alliancecan.ca/wiki/Ansys#Online_documentation here] for further information.
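As a rough illustration of what such parallelization involves, the following is a minimal sketch (a hypothetical example, not taken from the manual) using Fluent's compiler directives to separate host and compute-node code; it can only be compiled by Fluent's UDF build system, which supplies the <code>udf.h</code> header and the macros used here.

 /* hypothetical sketch of a parallelized UDF: node code is guarded with
    !RP_HOST and host code with !RP_NODE, so one source file builds
    correctly for serial, host and compute-node processes */
 #include "udf.h"
 
 DEFINE_ON_DEMAND(report_cell_count)
 {
     int count = 0;
 #if !RP_HOST                      /* serial and compute-node processes */
     Domain *d = Get_Domain(1);
     Thread *t;
     cell_t c;
     thread_loop_c(t, d)
     {
         begin_c_loop_int(c, t)    /* interior cells only, so partition */
         {                         /* boundary cells are not double-counted */
             count++;
         }
         end_c_loop_int(c, t)
     }
     count = PRF_GISUM1(count);    /* global sum across all compute nodes */
 #endif
     node_to_host_int_1(count);    /* pass the total to the host (no-op in serial) */
 #if !RP_NODE                      /* serial and host processes */
     Message("total interior cells: %d\n", count);
 #endif
 }

A UDF written this way runs unchanged as a serial (1 core), Shared Memory or MPI job; consult the chapter cited above for the full set of communication macros and their exact semantics.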


== Ansys CFX == <!--T:78-->