Translations:Ansys/2852/en: Difference between revisions

As of September 1, 2022, each researcher can run 4 jobs using a total of 188 anshpc (plus 4 anshpc per job).  Thus any of the following combinations of simultaneously running, uniformly sized jobs are possible: one 192-core job, two 98-core jobs, three 66-core jobs, or four 51-core jobs, according to (188 + 4*num_jobs) / num_jobs.  Since the best parallel performance is typically achieved by using all cores on a compute node (aka a full node), one can determine the maximum number of full nodes by dividing the total anshpc cores by the compute node size.  For example, consider graham, which has many 32-core (broadwell) and some 44-core (cascade) compute nodes.  The maximum number of nodes per job one could request when using full 32-core nodes would be 192/32=6, 98/32=3, 66/32=2, or 51/32=1 to run 1, 2, 3, or 4 simultaneous jobs respectively. For any given compute node size, on any cluster, the number of compute nodes can be calculated as ((188 + 4*num_jobs) / num_jobs) / cores_per_node, rounding down any remainder.
As of December 2023, each researcher can run 4 jobs using a total of 252 anshpc (plus 4 anshpc per job).  Thus any of the following uniform job size combinations are possible: one 256-core job, two 130-core jobs, three 88-core jobs, or four 67-core jobs, according to (252 + 4*num_jobs) / num_jobs.  Since the best parallel performance is usually achieved by using all cores on packed compute nodes (aka full nodes), one can determine the number of full nodes by dividing the total anshpc cores by the compute node size. For example, consider graham, which has many 32-core (broadwell) and some 44-core (cascade) compute nodes: the maximum number of nodes that could be requested when running various size jobs on 32-core nodes would be 256/32=8, 130/32=4, 88/32=2, or 67/32=2 (rounding down) to run 1, 2, 3, or 4 simultaneous jobs respectively. To express this in equation form, for a given compute node size on any cluster, the number of compute nodes can be calculated as (252 + 4*num_jobs) / (num_jobs*cores_per_node), rounded down; finally, determine the total cores to request by multiplying this whole number of nodes by cores_per_node.
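The per-job core and full-node calculation above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the official documentation; the constants (252 shared anshpc, 4 anshpc per job, 32 cores per graham broadwell node) come from the text, and the function names are made up for this example.

```python
# Illustrative sketch of the license math described above (assumed constants
# from the text; function names are hypothetical, not an official API).
ANSHPC_POOL = 252      # total anshpc shared across a researcher's jobs
ANSHPC_PER_JOB = 4     # extra anshpc granted to each running job

def max_cores_per_job(num_jobs):
    """Largest uniform core count for each of num_jobs simultaneous jobs."""
    return (ANSHPC_POOL + ANSHPC_PER_JOB * num_jobs) // num_jobs

def full_nodes_per_job(num_jobs, cores_per_node):
    """Whole packed (full) nodes per job, rounding down any remainder."""
    return max_cores_per_job(num_jobs) // cores_per_node

# Reproduce the graham 32-core (broadwell) example from the text:
for n in (1, 2, 3, 4):
    nodes = full_nodes_per_job(n, 32)
    print(f"{n} job(s): {max_cores_per_job(n)} cores each -> "
          f"{nodes} full node(s), request {nodes * 32} cores per job")
```

Running this prints the same series as in the text: 256, 130, 88, and 67 cores per job, giving 8, 4, 2, and 2 full 32-core nodes respectively.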

Revision as of 20:28, 9 January 2023

Message definition (Ansys)
As of December 2022, each researcher can run 4 jobs using a total of 252 anshpc (plus 4 anshpc per job).  Thus any of the following uniform job size combinations are possible: one 256-core job, two 130-core jobs, three 88-core jobs, or four 67-core jobs, according to (252 + 4*num_jobs) / num_jobs.  UPDATE: as of October 2024, the license limit has been increased to 8 jobs and 512 hpc cores per researcher (collectively across all clusters, for all applications) for a testing period, to allow some researchers more flexibility for parameter explorations and for running larger problems.  Because the license will be far more oversubscribed, some jobs may occasionally fail on startup; in that rare case the jobs will need to be resubmitted.  Nevertheless, assuming most researchers continue with a pattern of running one or two jobs using 128 cores on average in total, this is not expected to be an issue.  That said, it is helpful to close Ansys applications immediately upon completion of any GUI-related tasks, so that any licenses consumed while the application is otherwise idle are released for others to use.