Allocations and compute scheduling/fr: Difference between revisions

</div>


<div class="mw-translate-fuzzy">
{| class="wikitable" style="margin: auto; text-align: center;"
|-
! scope="col"| Model
! scope="col"| FP16 score
! scope="col"| FP32 score
! scope="col"| Memory score
! scope="col"| Weighted score
! colspan="2" scope="col"| Available
! scope="col"| Allocated by competition
|-
! scope="row"| Weight:
! scope="col"| 1.6
! scope="col"| 1.6
! scope="col"| 0.8
! scope="col"| (RGU)
! scope="col"| Currently
! scope="col"| For 2025
! scope="col"| 2025 competition
|-
! scope="row"| H100-80gb
| 3.44 || 3.17 || 2.0 || 12.2 || no || yes || yes
|-
! scope="row"| A100-80gb
| 1.00 || 1.00 || 2.0 || 4.8 || no || ? || no
|-
! scope="row"| A100-40gb
| <b>1.00</b> || <b>1.00</b> || <b>1.0</b> || <b>4.0</b> || yes || yes || yes
|-
! scope="row"| V100-32gb
| 0.81 || 0.40 || 0.8 || 2.6 || yes || ? || no
|-
! scope="row"| V100-16gb
| 0.81 || 0.40 || 0.4 || 2.2 || yes || ? || no
|-
! scope="row"| T4-16gb
| 0.42 || 0.21 || 0.4 || 1.3 || yes || ? || no
|-
! scope="row"| P100-16gb
| 0.48 || 0.03 || 0.4 || 1.1 || yes || no || no
|-
! scope="row"| P100-12gb
| 0.48 || 0.03 || 0.3 || 1.0 || yes || no || no
|}
</div>
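The weighted score in the table above is a linear combination of the per-model scores using the weights in the second header row (1.6 for FP16, 1.6 for FP32, 0.8 for memory). A minimal sketch of that arithmetic in Python; the function name <code>weighted_score</code> and the rounding to one decimal are illustrative assumptions, not part of the official allocation tooling:

```python
# Weights from the table's header row: FP16, FP32, and memory scores.
# weighted_score() is a hypothetical helper, not an official API.
WEIGHTS = {"fp16": 1.6, "fp32": 1.6, "memory": 0.8}

def weighted_score(fp16: float, fp32: float, memory: float) -> float:
    """Combine a GPU model's scores into its weighted (RGU) score."""
    total = (WEIGHTS["fp16"] * fp16
             + WEIGHTS["fp32"] * fp32
             + WEIGHTS["memory"] * memory)
    return round(total, 1)  # table values are shown to one decimal

print(weighted_score(3.44, 3.17, 2.0))  # H100-80gb -> 12.2
print(weighted_score(1.00, 1.00, 1.0))  # A100-40gb (reference) -> 4.0
```

Note that the A100-40gb row, with all scores equal to 1.00, serves as the reference point of the scale.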


With the 2025 [[infrastructure renewal]] it will become possible to schedule a fraction of a GPU using [[multi-instance GPU]] technology.  Different jobs, potentially belonging to different users, can run on the same GPU at the same time.  Following [https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#terminology NVidia's terminology], a fraction of a GPU allocated to a single job is called a "GPU instance", also sometimes called a "MIG instance".   