Translations:Ansys/2852/en: Difference between revisions
Revision as of 16:59, 26 January 2023
As of December 2023, each researcher can run up to 4 simultaneous jobs using a total of 252 anshpc licenses (plus 4 anshpc per job). The possible uniform job size combinations follow from ( 252 + 4*num_jobs ) / num_jobs: one 256-core job, two 130-core jobs, three 88-core jobs, or four 67-core jobs. Since the best parallel performance is usually achieved by using all cores on packed compute nodes (also known as full nodes), the number of full nodes can be determined by dividing the total anshpc cores by the compute node size. For example, consider Graham, which has many 32-core (Broadwell) and some 44-core (Cascade Lake) compute nodes. The maximum number of 32-core nodes that could be requested when running 1, 2, 3 or 4 simultaneous jobs would be 256/32=8, 130/32≈4, 88/32≈2 and 67/32≈2 respectively (rounding down). To express this in equation form, for a given compute node size on any cluster, the number of compute nodes is ( 252 + 4*num_jobs ) / ( num_jobs * cores_per_node ), rounded down; the total cores to request is then that whole number of nodes multiplied by cores_per_node.
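The arithmetic above can be sketched as a small helper script. This is only an illustration using the December 2023 limits quoted above (252 shared anshpc licenses plus 4 per job); the function and variable names are invented for this example and are not part of any Ansys or scheduler tooling.

```python
# Assumed license limits from the text above (December 2023).
TOTAL_ANSHPC = 252    # shared pool of anshpc licenses per researcher
PER_JOB_ANSHPC = 4    # extra anshpc granted per running job

def max_cores_per_job(num_jobs: int) -> int:
    """Largest uniform per-job core count the license pool allows,
    i.e. (252 + 4*num_jobs) / num_jobs rounded down."""
    return (TOTAL_ANSHPC + PER_JOB_ANSHPC * num_jobs) // num_jobs

def full_nodes(num_jobs: int, cores_per_node: int) -> int:
    """Number of packed (full) compute nodes per job, rounded down."""
    return max_cores_per_job(num_jobs) // cores_per_node

def cores_to_request(num_jobs: int, cores_per_node: int) -> int:
    """Total cores to request per job when using only full nodes."""
    return full_nodes(num_jobs, cores_per_node) * cores_per_node

# Example: Graham's 32-core Broadwell nodes.
for n in (1, 2, 3, 4):
    print(n, max_cores_per_job(n), full_nodes(n, 32), cores_to_request(n, 32))
```

Running 1, 2, 3 or 4 simultaneous jobs on 32-core nodes gives per-job license limits of 256, 130, 88 and 67 cores, hence 8, 4, 2 and 2 full nodes respectively.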