Ansys



<!--T:2852-->
As of December 2022, each researcher could run 4 jobs using a total of 252 anshpc licenses (plus 4 anshpc per job).  Thus any of the following uniform job size combinations were possible: one 256-core job, two 130-core jobs, three 88-core jobs, or four 67-core jobs, according to (252 + 4*num_jobs) / num_jobs.  UPDATE: as of October 2024 the license limit has been increased to 8 jobs and 512 HPC cores per researcher (collectively across all clusters and all applications) for a testing period, to give researchers more flexibility for parameter explorations and for running larger problems.  Since the license will be far more oversubscribed, occasional job failures on startup may occur; in that rare case the affected jobs will simply need to be resubmitted.  Nevertheless, assuming most researchers continue with a pattern of running one or two jobs using 128 cores on average in total, this is not expected to be an issue.  That said, please close Ansys applications immediately upon completing any GUI-related tasks, to release any licenses that may be consumed while the application sits idle, so that others can use them.
 
<!--T:2854-->
Since the best parallel performance is usually achieved by using all cores on packed compute nodes (aka full nodes), one can determine the number of full nodes by dividing the total anshpc cores by the compute node size.  For example, consider Graham, which has many 32-core (Broadwell) and some 44-core (Cascade) compute nodes.  Assuming the 252 anshpc core limit, the maximum number of 32-core nodes that could be requested when running 1, 2, 3 or 4 simultaneous jobs would be 256/32=8, 130/32=~4, 88/32=~2 or 67/32=~2 respectively. To express this in equation form, for a given compute node size on any cluster, the number of compute nodes can be calculated by ( 252 + (4*num_jobs) ) / (num_jobs*cores_per_node), rounded down; the total cores to request is then this rounded-down number of nodes multiplied by cores_per_node.
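The node-count rule above can be sketched as a short helper.  The function name <code>full_node_request</code> is hypothetical (it is not part of Ansys or the scheduler), and the defaults assume the December 2022 limits of 252 shared anshpc cores plus 4 anshpc per job:

```python
def full_node_request(num_jobs, cores_per_node, base_hpc=252, per_job_hpc=4):
    """Return (nodes_per_job, total_cores_per_job) for uniform simultaneous jobs.

    Implements ( base_hpc + per_job_hpc*num_jobs ) / (num_jobs*cores_per_node),
    rounded down, then multiplies back by cores_per_node to get the core request.
    """
    # Floor-divide the shared license pool across the simultaneous jobs
    nodes = (base_hpc + per_job_hpc * num_jobs) // (num_jobs * cores_per_node)
    return nodes, nodes * cores_per_node

# Example: one job on Graham's 32-core Broadwell nodes -> 8 full nodes, 256 cores
print(full_node_request(1, 32))
```

For instance, two simultaneous jobs on 32-core nodes each get 4 full nodes (128 cores), while a single job on a 44-core Cascade node would get 5 full nodes (220 cores), since 256/44 rounds down to 5.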


<!--T:2853-->
The SHARCNET Ansys license is made available on a first-come, first-served basis.  Should an unusually large number of Ansys jobs be submitted on a given day, some jobs could fail on startup if insufficient licenses are available.  If this occurs, simply resubmit your job.  If your research requires more than 512 HPC cores (the recent new maximum limit), then open a ticket to let us know.  Most likely you will need to purchase (and host) your own Ansys license at your local institution; if it is urgently needed, contact your local [https://www.simutechgroup.com SimuTech] office for a quote.  If, however, enough researchers express the same need over time, acquiring a larger Ansys license on the next renewal cycle may be possible.
 
<!--T:4774-->
If more cores are needed (beyond what the SHARCNET license can provide per user), a research group can purchase a license directly from [https://www.simutechgroup.com SimuTech] to host on their own institutional license server.  Note however that an extra 20% country-wide uplift fee must be paid if the cluster(s) to be used are not co-located at the institution.  Waterloo researchers who only use Graham are therefore exempt, since Graham is physically located on their campus.


<!--T:4775-->
Researchers can also purchase their own Ansys license subscription from [https://www.cmc.ca/subscriptions/ CMC] and use their remote license servers.  Doing so has several benefits: 1) a local institutional license server is not needed; 2) a physical license does not need to be obtained upon each renewal; 3) the license can be used [https://www.cmc.ca/ansys-campus-solutions-cmc-00200-04847/ almost] anywhere, including at home, at institutions, or on any Alliance cluster across Canada; and 4) download and installation instructions for the Windows version of Ansys are provided, so researchers can run SpaceClaim on their own computer (not possible on the Alliance since all systems are Linux based).  There is however one potentially serious limitation to be aware of: according to the CMC [https://www.cmc.ca/qsg-ansys-cadpass-r20/ Ansys Quick Start Guides], there may be a 64-core limit per user.


==== License server file ==== <!--T:92-->