Ansys
==== Compiled UDF ==== <!--T:522-->
To use this approach your UDF must be compiled on an Alliance cluster at least once.  Doing so creates a libudf subdirectory structure containing the required <code>libudf.so</code> shared library.  The libudf directory cannot simply be copied from a remote system (such as your laptop) to the Alliance, since the library dependencies of the shared library will not be satisfied, resulting in Fluent crashing on startup.  That said, once you have compiled your UDF on one Alliance cluster, you can transfer the newly created libudf to any other Alliance cluster, provided your account there loads the same StdEnv environment module version.  Once copied, the UDF can be used by uncommenting the second (load) libudf line below in your journal file when submitting jobs to the cluster.  Both (compile and load) libudf lines should not be left uncommented in your journal file when submitting jobs, otherwise your UDF will automatically be (re)compiled for each and every job.  Not only is this highly inefficient, it will also lead to race-like build conflicts if multiple jobs are run from the same directory.  Besides configuring your journal file to build your UDF, the Fluent GUI (run on any cluster compute node or gra-vdi) may also be used: navigate to the Compiled UDFs Dialog Box, add the UDF source file and click Build.  When using a compiled UDF with parallel jobs, your source file should be parallelized as discussed in the section below.


  define/user-defined/compiled-functions compile libudf yes sampleudf.c "" ""
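As a sketch of the workflow described above, a journal file might keep both TUI lines together, with the compile line commented out (journal files use a leading <code>;</code> for comments) after the first successful build so that only the load line runs on each job.  The <code>load</code> command shown here is the standard Fluent TUI counterpart to the compile line and is given as an illustration, not copied from this page:

  ; Compile once per cluster/StdEnv version, then comment this line out:
  ; define/user-defined/compiled-functions compile libudf yes sampleudf.c "" ""
  ; Load the previously built libudf at startup of every job:
  define/user-defined/compiled-functions load libudf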