Using cloud GPUs

This guide describes how to allocate GPU resources to a virtual machine (VM), install the necessary drivers, and check whether the GPU can be used.

Supported flavors

To use a GPU within a VM, the instance needs to be deployed on one of the flavors listed below. The GPU will be available to the operating system via the PCI bus.

  • g2-c24-112gb-500
  • g1-c14-56gb-500
  • g1-c14-56gb
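
For illustration, a minimal OpenStack command-line call to boot an instance on one of these flavors might look like the sketch below. The image, key pair, and network names are placeholders; substitute values that exist in your own project.

# Hypothetical example: boot a VM on a GPU flavor (image/key/network names are placeholders)
openstack server create --flavor g1-c14-56gb --image Debian-10 \
    --key-name mykey --network my-tenant-network gpu-test-vm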

Preparing a Debian 10 instance

To use the GPU via the PCI bus, the proprietary NVIDIA drivers are required. Due to Debian's policy, the drivers are available from the non-free pool only.

Enable the non-free pool

Log in using ssh and add the lines below to /etc/apt/sources.list, if they are not already there.

deb http://deb.debian.org/debian buster main contrib non-free
deb http://security.debian.org/ buster/updates main contrib non-free
deb http://deb.debian.org/debian buster-updates main contrib non-free
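
If you would rather append these entries from the shell than edit the file by hand, a small sketch (run as root) is shown below; it assumes the three lines are not already present.

# Append the non-free pool entries to the package sources (run as root)
cat >> /etc/apt/sources.list <<'EOF'
deb http://deb.debian.org/debian buster main contrib non-free
deb http://security.debian.org/ buster/updates main contrib non-free
deb http://deb.debian.org/debian buster-updates main contrib non-free
EOF
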
Install the NVIDIA driver

The following command:

  • updates the apt cache, so that apt will be aware of the new software pool sections,
  • updates the OS to the latest software versions, and
  • installs kernel headers, an NVIDIA driver, and pciutils, which will be required to list the devices connected to the PCI bus.
root@gpu2:~# apt-get update && apt-get -y dist-upgrade && apt-get -y install pciutils linux-headers-`uname -r` linux-headers-amd64 nvidia-driver

If this command finishes successfully, the NVIDIA driver will have been compiled and loaded.

  • Check if the GPU is exposed on the PCI bus
root@gpu2:~# lspci -vk
[...]
00:05.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
	Subsystem: NVIDIA Corporation GK210GL [Tesla K80]
	Physical Slot: 5
	Flags: bus master, fast devsel, latency 0, IRQ 11
	Memory at fd000000 (32-bit, non-prefetchable) [size=16M]
	Memory at 1000000000 (64-bit, prefetchable) [size=16G]
	Memory at 1400000000 (64-bit, prefetchable) [size=32M]
	Capabilities: [60] Power Management version 3
	Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [78] Express Endpoint, MSI 00
	Kernel driver in use: nvidia
	Kernel modules: nvidia
[...]
  • Check that the nvidia kernel module is loaded
root@gpu2:~# lsmod | grep nvidia
nvidia              17936384  0
nvidia_drm             16384  0
  • Start nvidia-persistenced, which will create the necessary device files and make the GPU accessible in user space.
root@gpu2:~# systemctl restart nvidia-persistenced
root@gpu2:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Mar  6 18:55 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Mar  6 18:55 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Mar  6 18:55 /dev/nvidia-modeset

The GPU is now available within the user space and can be used.
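
As an optional sanity check, the nvidia-smi utility reports the GPU model, driver version, and current utilisation. On Debian it ships in a separate package, so it may need to be installed first.

# Optional check: query the GPU with the NVIDIA management tool
apt-get -y install nvidia-smi
nvidia-smi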

Preparing a CentOS 7 instance

NVIDIA provides repositories for various distributions, so the required software can be installed and maintained through these repositories.

To compile the module sources from the NVIDIA repository, it is necessary to install dkms, which automatically rebuilds the modules on kernel updates and therefore ensures that the GPU keeps working after any OS update. dkms is provided in the EPEL repository. The kernel headers and kernel source also need to be installed before the NVIDIA driver can be set up.

Enable the EPEL repository and install needed software

[root@gpu-centos centos]# yum -y update && reboot
yum -y install epel-release && yum -y install dkms kernel-devel-$(uname -r) kernel-headers-$(uname -r)

Add the NVIDIA repository and install the driver package

Add the yum repository and install the driver packages:

[root@gpu-centos centos]# yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo
yum install -y cuda-drivers

NVIDIA uses its own GPG key to sign its packages. yum will ask to autoimport it. Reply "y" for "yes" when prompted.

Retrieving key from http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/7fa2af80.pub
Importing GPG key 0x7FA2AF80:
 Userid     : "cudatools <cudatools@nvidia.com>"
 Fingerprint: ae09 fe4b bd22 3a84 b2cc fce3 f60f 4b3d 7fa2 af80
 From       : http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/7fa2af80.pub
Is this ok [y/N]: y

After installation, reboot the VM to properly load the module and create the NVIDIA device files.

[root@gpu-centos ~]# ls -al /dev/nvidia*
crw-rw-rw-. 1 root root 195,   0 Mar 10 20:35 /dev/nvidia0
crw-rw-rw-. 1 root root 195, 255 Mar 10 20:35 /dev/nvidiactl
crw-rw-rw-. 1 root root 195, 254 Mar 10 20:35 /dev/nvidia-modeset
crw-rw-rw-. 1 root root 241,   0 Mar 10 20:35 /dev/nvidia-uvm
crw-rw-rw-. 1 root root 241,   1 Mar 10 20:35 /dev/nvidia-uvm-tools

The GPU is now accessible via any user space tool.
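
As on the Debian instance, a quick way to confirm that everything is in place is to query the GPU with nvidia-smi (which should have been pulled in by the cuda-drivers packages) and to verify that dkms is tracking the NVIDIA module so it is rebuilt on future kernel updates.

# Confirm the driver can see the GPU
nvidia-smi
# Confirm dkms is tracking the nvidia module for future kernel updates
dkms status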