CephFS

The CephFS file system can be shared across multiple OpenStack instance hosts. To use this service, send a request to nuage@tech.alliancecan.ca.

The procedure is fairly technical and requires basic Linux skills to create and edit files, set permissions, and create mount points. If you need assistance, write to nuage@tech.alliancecan.ca.

Procedure

Request access to shares

If you do not already have a quota for this service, write to nuage@tech.alliancecan.ca with the following information:

  • the OpenStack project name
  • the required quota capacity (in GB)
  • the number of shares required

OpenStack configuration: Create a CephFS share

Create a share.
In Project --> Share --> Shares, click on +Create Share.
Share Name = enter a name meaningful for your project (for example, project-name-shareName)
Share Protocol = CephFS
Size = the required size for the share
Share Type = cephfs
Availability Zone = nova
Do not select Make visible for all; otherwise, the share will be accessible by all users in all projects.
Click on the Create button.
Configuring CephFS in the Horizon interface
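
If you prefer the command line, the share can likely also be created with the OpenStack client; this is a minimal sketch, assuming the manila plugin for the client is installed, and using the example name above with a hypothetical size of 10 GB:

 openstack share create --name project-name-shareName --share-type cephfs --availability-zone nova CephFS 10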


Create a rule to generate a key.
In Project --> Share --> Shares --> Actions column, select Manage Rules from the drop-down menu.
Click on the +Add Rule button on the right-hand side of the page.
Access Type = cephx
Access Level = select read-write or read-only (you can create several rules with different access levels)
Access To = enter a meaningful name for the key, for example def-project-shareName-read-write
A correct CephFS configuration
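
The same rule can likely be added from the command line; a sketch, again assuming the manila plugin for the OpenStack client and the example names used on this page:

 openstack share access create project-name-shareName cephx def-project-shareName-read-write --access-level rw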


Take note of the details you will need.
In Project --> Share --> Shares, click on the name of the share.
In Share Overview, note the Path.
Under Access Rules, note the Access Key (access keys are 40 characters long and end with the = sign; if you do not see an access key, you probably did not set the access rule type to cephx).
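
These details can also be read back with the OpenStack client; a sketch, again assuming the manila plugin:

 openstack share show project-name-shareName
 openstack share access list project-name-shareName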

Attach the CephFS network to your instance

On Arbutus

On Arbutus, the CephFS network is already exposed to your VM; there is nothing to do here. Go to the VM configuration section.

On SD4H/Juno

On SD4H/Juno, you need to explicitly attach the CephFS network to the VM.

With the web GUI

For each VM you need to attach, select Instance --> Action --> Attach interface, then select CephFS-Network and leave the Fixed IP Address box empty.

Selecting the CephFS network


With the OpenStack client

List the servers and note the ID of the server you need to attach to the CephFS network:

$ openstack  server list 
+--------------------------------------+--------------+--------+-------------------------------------------+--------------------------+----------+
| ID                                   | Name         | Status | Networks                                  | Image                    | Flavor   |
+--------------------------------------+--------------+--------+-------------------------------------------+--------------------------+----------+
| 1b2a3c21-c1b4-42b8-9016-d96fc8406e04 | prune-dtn1   | ACTIVE | test_network=172.16.1.86, 198.168.189.3   | N/A (booted from volume) | ha4-15gb |
| 0c6df8ea-9d6a-43a9-8f8b-85eb64ca882b | prune-mgmt1  | ACTIVE | test_network=172.16.1.64                  | N/A (booted from volume) | ha4-15gb |
| 2b7ebdfa-ee58-4919-bd12-647a382ec9f6 | prune-login1 | ACTIVE | test_network=172.16.1.111, 198.168.189.82 | N/A (booted from volume) | ha4-15gb |
+--------------------------------------+--------------+--------+-------------------------------------------+--------------------------+----------+

Select the ID of the VM you want to attach; we will pick the first one here and run:

$ openstack  server add network 1b2a3c21-c1b4-42b8-9016-d96fc8406e04 CephFS-Network
$ openstack  server list 
+--------------------------------------+--------------+--------+---------------------------------------------------------------------+--------------------------+----------+
| ID                                   | Name         | Status | Networks                                                            | Image                    | Flavor   |
+--------------------------------------+--------------+--------+---------------------------------------------------------------------+--------------------------+----------+
| 1b2a3c21-c1b4-42b8-9016-d96fc8406e04 | prune-dtn1   | ACTIVE | CephFS-Network=10.65.20.71; test_network=172.16.1.86, 198.168.189.3 | N/A (booted from volume) | ha4-15gb |
| 0c6df8ea-9d6a-43a9-8f8b-85eb64ca882b | prune-mgmt1  | ACTIVE | test_network=172.16.1.64                                            | N/A (booted from volume) | ha4-15gb |
| 2b7ebdfa-ee58-4919-bd12-647a382ec9f6 | prune-login1 | ACTIVE | test_network=172.16.1.111, 198.168.189.82                           | N/A (booted from volume) | ha4-15gb |
+--------------------------------------+--------------+--------+---------------------------------------------------------------------+--------------------------+----------+

We can see that the CephFS network is now attached to the first instance.
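
If you attached the wrong instance, the interface can be removed again with the reverse command:

 openstack server remove network 1b2a3c21-c1b4-42b8-9016-d96fc8406e04 CephFS-Network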

VM configuration

Install the required packages.

Red Hat family (RHEL, CentOS, Fedora, Scientific Linux, SUSE, etc.):

Install the relevant repositories for access to the ceph client packages:

File : /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-quincy/el8/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-quincy/el8/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-quincy/el8/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

The epel repo also needs to be in place:

 sudo dnf install epel-release

You can now install the ceph lib, cephfs client and other dependencies:

 sudo dnf install -y libcephfs2 python3-cephfs ceph-common python3-ceph-argparse

Install the required packages for the Debian family (Debian, Ubuntu, Mint, etc.):
Once you have figured out your distro {codename} with lsb_release -sc, you can add the repository:

sudo apt-add-repository 'deb https://download.ceph.com/debian-quincy/ {codename} main'
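
The release key and the client packages then need to be installed as well; a plausible completion, assuming the standard Ceph release key and the same client packages as on the RPM side (on recent Debian/Ubuntu releases a keyring file under /etc/apt/ is preferred over the deprecated apt-key):

 wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
 sudo apt update
 sudo apt install libcephfs2 python3-cephfs ceph-common python3-ceph-argparse

On either family, you can verify that the client is in place with ceph --version.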

Configure the ceph client

Once the client is installed, you can create a ceph.conf file; note the different mon host values for the two clouds.

File : /etc/ceph/ceph.conf (on Arbutus)
[global]
admin socket = /var/run/ceph/$cluster-$name-$pid.asok
client reconnect stale = true
debug client = 0/2
fuse big writes = true
mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
[client]
quota = true
File : /etc/ceph/ceph.conf (on SD4H/Juno)
[global]
admin socket = /var/run/ceph/$cluster-$name-$pid.asok
client reconnect stale = true
debug client = 0/2
fuse big writes = true
mon host = 10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789
[client]
quota = true
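
One way to confirm which monitors your client will use is the ceph-conf lookup tool shipped with ceph-common (assuming the configuration file location above):

 ceph-conf -c /etc/ceph/ceph.conf --lookup mon_host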

You can find the monitor information in the share details, in the Path field that will be used to mount the volume. If the values shown on the web page differ from what is shown here, it means this wiki page is out of date. You also need to put your client name and secret in the ceph.keyring file:

File : /etc/ceph/ceph.keyring
[client.MyCephFS-RW]
    key = <access Key>
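
Since the keyring holds a secret, it is good practice to make it readable only by root:

 sudo chmod 600 /etc/ceph/ceph.keyring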

Again, the access key and client name (here MyCephFS-RW) are found under the access rules on your project web page: in Project --> Share --> Shares, click on the name of the share.

Retrieve the connection information from the share page for your connection:
Open up the share details by clicking the name of the share in the Shares page.
Copy the entire path of the share for mounting the filesystem.

Mount the filesystem
Create a mount point directory somewhere on your host (/cephfs is used here):

 mkdir /cephfs

Via a kernel mount using the ceph driver: you can make the mount permanent by adding the following to the VM's fstab:

File : /etc/fstab (on Arbutus)
:/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ ceph name=MyCephFS-RW 0  2
File : /etc/fstab (on SD4H/Juno)
:/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ ceph name=MyCephFS-RW,mds_namespace=cephfs_4_2,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev,rw 0  2

Note: there is a non-standard leading : before the device path; it is not a typo! The mount options differ between systems: the mds_namespace option is required for SD4H/Juno, while the other options are performance tweaks.

The mount can also be done from the command line:

sudo mount -t ceph :/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ -o name=MyCephFS-RW    # on Arbutus
sudo mount -t ceph :/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ -o name=MyCephFS-RW,mds_namespace=cephfs_4_2,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev,rw    # on SD4H/Juno
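
A quick check that the filesystem is mounted as expected (assuming the /cephfs mount point used above):

 findmnt /cephfs
 df -h /cephfs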

Or via ceph-fuse, if the file system needs to be mounted in user space (no leading : here). Install the ceph-fuse package:

sudo dnf install ceph-fuse

Let the fuse mount be accessible in user space by uncommenting user_allow_other in the fuse.conf file:

File : /etc/fuse.conf
# mount_max = 1000
user_allow_other

You can now mount CephFS in a user's home directory:

mkdir ~/my_cephfs
ceph-fuse ~/my_cephfs/ --id=MyCephFS-RW --conf=~/ceph.conf --keyring=~/ceph.keyring --client-mountpoint=/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c
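
To unmount a fuse mount, use fusermount rather than plain umount:

 fusermount -u ~/my_cephfs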

Note that the client name is given here by --id. The ceph.conf and ceph.keyring contents are exactly the same as for the ceph kernel mount.

Notes

A particular share can have more than one user key provisioned for it. This allows more granular access to the filesystem, for example if you need some hosts to access the filesystem only in a read-only capacity. If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure accordingly. This service is not available to hosts outside of the OpenStack cluster.
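
For example, with a hypothetical second key MyCephFS-RO created with the read-only access level, the kernel mount shown above could become:

 sudo mkdir /cephfs-ro
 sudo mount -t ceph :/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs-ro/ -o name=MyCephFS-RO,ro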