Star-CCM+
STAR-CCM+ is a software solution that provides accurate and efficient multidisciplinary technologies in a single integrated environment; it is developed by Siemens.

License limitations

The STAR-CCM+ binaries are installed on our servers, but we do not have a license for our users; they must therefore have their own license.

Running STAR-CCM+ on our servers

Select one of the available modules, according to your needs (an example of loading one of them follows the list):

  • starccm for the double-precision flavour,
  • starccm-mixed for the mixed-precision flavour.
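For example, to use the mixed-precision flavour you would load the corresponding module in your shell or in your job script. This is a minimal sketch; the versions actually installed on a given cluster can be listed with module avail, and loading the module without a version picks the default one.

module avail starccm-mixed
module load starccm-mixed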

Two MPI distributions can be used:

  • IBM Platform MPI is the default distribution, but it is not compatible with the OmniPath network fabric on Cedar;
  • Intel MPI is selected with the option -mpi intel.

Neither IBM Platform MPI nor Intel MPI is tightly coupled with our scheduler; you must therefore tell starccm+ which hosts to use by means of a file containing the list of available hosts. To produce this file, we provide the slurm_hl2hl.py script, which outputs the list of hosts when called with the option --format STAR-CCM+. This list can then be written to a file and read by STAR-CCM+. Also, because these MPI distributions are not tightly integrated with our scheduler, you should use the options --ntasks-per-node=1 and --cpus-per-task=32 when submitting a job.

You will also need to set up your job environment to use your license. If you are using Adapco's online "pay-on-usage" server, the configuration is rather simple. If you are using an internal license server, please contact us so that we can help you set up access to it. When all is done, your submission script should look like the following, where two nodes are used for one hour; you can adjust these numbers to fit your needs.

File: mysub.sh

#!/bin/bash
#SBATCH --time=01:00:00        # wall-clock time limit (HH:MM:SS)
#SBATCH --nodes=2              # number of nodes
#SBATCH --ntasks-per-node=1    # one task per node; STAR-CCM+ starts its own processes
#SBATCH --cpus-per-task=32     # cores to use on each node

# Load the STAR-CCM+ module (double-precision flavour)
module load starccm/12.04.011-R8

# License settings for Adapco's pay-on-usage server
export LM_PROJECT='YOUR ADAPCO PROJECT ID GOES HERE'
export CDLMD_LICENSE_FILE="1999@flex.cd-adapco.com"

# Write the list of hosts allocated by Slurm in the format expected by STAR-CCM+
slurm_hl2hl.py --format STAR-CCM+ > machinefile

# Total number of cores = tasks x cpus per task
NCORE=$((SLURM_NTASKS * SLURM_CPUS_PER_TASK))

# Run the simulation in batch mode with Intel MPI across all allocated cores
starccm+ -power -np $NCORE -machinefile machinefile -batch -mpi intel /path/to/your/simulation/file
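
The script can then be submitted to the Slurm scheduler in the usual way, assuming it was saved as mysub.sh as above:

sbatch mysub.sh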