CephFS

CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@tech.alliancecan.ca.

This is a fairly technical procedure that assumes basic Linux skills for creating/editing files, setting permissions, and creating mount points. For assistance in setting up this service, write to cloud@tech.alliancecan.ca.

Procedure

Request access to shares

If you do not already have a quota for the service, you will need to request this through cloud@tech.alliancecan.ca. In your request please provide the following:

  • OpenStack project name
  • amount of quota required in GB
  • number of shares required

OpenStack configuration: create CephFS share

  1. Create the share.
(Screenshot: configuration of CephFS in the Horizon GUI)
    • In Project --> Share --> Shares, click on +Create Share.
    • Share Name = enter a name that identifies your project (e.g. project-name-shareName)
    • Share Protocol = CephFS
    • Size = size you need for this share
    • Share Type = cephfs
    • Availability Zone = nova
    • Do not check Make visible for all, otherwise the share will be accessible by all users in all projects.
    • Click on the Create button.
  2. Create an access rule to generate an access key.
    • In Project --> Share --> Shares --> Actions column, select Manage Rules from the drop-down menu.
    • Click on the +Add Rule button (right of page).
    • Access Type = cephx
    • Access Level = select read-write or read-only (you can create multiple rules for either access level if required)
    • Access To = select a key name that describes the key. This name is important because it will be used in the CephFS client configuration on the VM; we will use MyCephFS-RW on this page.
  3. Note the share details which you will need later.
    • In Project --> Share --> Shares, click on the name of the share.
(Screenshot: a properly configured CephFS share)
    • In the Share Overview, note the three elements circled in red in the screenshot: the Path, which will be used in the mount command on the VM; the Access Rules, which will be the client name; and the Access Key, which will let the VM's client connect. A command-line alternative to this Horizon procedure is sketched just below.
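
If you prefer the command line over Horizon, the same steps can be done with the Manila CLI. This is a sketch, assuming python-manilaclient is installed and your OpenStack RC file is sourced; the share name and size are placeholders:

    # Create a 10 GB CephFS share (mirrors the Horizon form above)
    manila create CephFS 10 --name project-name-shareName --share-type cephfs --availability-zone nova

    # Add a read-write cephx access rule; MyCephFS-RW matches the key name used on this page
    manila access-allow project-name-shareName cephx MyCephFS-RW --access-level rw

    # Retrieve the access key and the export path needed on the VM
    manila access-list project-name-shareName
    manila share-export-location-list project-name-shareName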

VM configuration: install and configure CephFS client

  1. Install the required packages.

    • Red Hat family (RHEL, CentOS, Fedora, Rocky, Alma):

        Check the available releases at https://download.ceph.com/ and look for the recent rpm-* directories; Quincy is the latest stable release at the time of this writing. The compatible distributions are listed at https://download.ceph.com/rpm-quincy/; we will show the full installation for el8.
      1. Install relevant repositories for access to ceph client packages:
        File : /etc/yum.repos.d/ceph.repo

        [Ceph]
        name=Ceph packages for $basearch
        baseurl=http://download.ceph.com/rpm-quincy/el8/$basearch
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        
        [Ceph-noarch]
        name=Ceph noarch packages
        baseurl=http://download.ceph.com/rpm-quincy/el8/noarch
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        
        [ceph-source]
        name=Ceph source packages
        baseurl=http://download.ceph.com/rpm-quincy/el8/SRPMS
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        


        The EPEL repository also needs to be in place:

        sudo dnf install epel-release
        

        You can now install the Ceph libraries, the CephFS client, and other dependencies:

        sudo dnf install -y libcephfs2 python3-cephfs ceph-common python3-ceph-argparse
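
        You can check that the client installed correctly (a quick sanity check; the exact version string will vary):

        ceph --version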
        


    • Debian family (Debian, Ubuntu, Mint, etc.):

        You can add the repository once you have figured out your distribution {codename} with lsb_release -sc:

            sudo apt-add-repository 'deb https://download.ceph.com/debian-quincy/ {codename} main'
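
        The repository is signed with the Ceph release key, and the client packages still need to be installed. A minimal sketch (the apt-key import is the classic approach; newer releases may prefer a keyring file under /etc/apt/keyrings):

            wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
            sudo apt update
            sudo apt install -y ceph-common libcephfs2 python3-cephfs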
        
  2. Configure the Ceph client:

    First, create a ceph.conf file. Note that the mon host values differ from one cloud to the other.

    For Arbutus:

    File : /etc/ceph/ceph.conf

    [global]
    admin socket = /var/run/ceph/$cluster-$name-$pid.asok
    client reconnect stale = true
    debug client = 0/2
    fuse big writes = true
    mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
    [client]
    quota = true
    


    For SD4H/Juno:

    File : /etc/ceph/ceph.conf

    [global]
    admin socket = /var/run/ceph/$cluster-$name-$pid.asok
    client reconnect stale = true
    debug client = 0/2
    fuse big writes = true
    mon host = 10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789
    [client]
    quota = true
    


    Note that the monitor values differ from cluster to cluster. You can find the monitor information in the Path field of the share details; it is the same address list that will be used to mount the volume.
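
    The client also needs the access key generated by the access rule. A minimal sketch, assuming the key name MyCephFS-RW used on this page; paste the Access Key value from the share details in place of the placeholder:

    File : /etc/ceph/ceph.keyring

    [client.MyCephFS-RW]
        key = <Access Key from the share details>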

  3. Retrieve the connection information from the share page:

    • Open up the share details by clicking the name of the share in the Shares page.
    • Copy the entire path of the share for mounting the filesystem.
  4. Mount the filesystem.


    • Create a mount point directory somewhere on your host (/cephfs is used here): sudo mkdir /cephfs
    • Via kernel mount using the ceph driver. You can make the mount permanent by adding the following to the VM's /etc/fstab:
      For Arbutus:

      File : /etc/fstab

      10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ ceph name=MyCephFS-RW 0  2


      For SD4H/Juno:

      File : /etc/fstab

      10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789:/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ ceph name=MyCephFS-RW,mds_namespace=cephfs_4_2,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev,rw 0  2


      Note that the mount options differ from one system to the other. The mds_namespace option is required for SD4H/Juno, while the other options are performance tweaks.

      • It can also be done from the command line. For Arbutus:

        sudo mount -t ceph 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ -o name=MyCephFS-RW

        For SD4H/Juno:

        sudo mount -t ceph 10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789:/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ -o name=MyCephFS-RW,mds_namespace=cephfs_4_2,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev,rw

      • Via ceph-fuse
        • Install the ceph-fuse package first (e.g. sudo dnf install -y ceph-fuse on the Red Hat family).
        • Syntax: sudo ceph-fuse <mountPoint> --id=<shareKeyName> --conf=<pathtoCeph.conf> --keyring=<fullKeyringLocation> --client-mountpoint=pathFromShareDetails
          • e.g. sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7
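
    Whichever method you used, you can verify the mount. A quick check (output is illustrative; with quota = true, df reports the share size):

      sudo mount -a                           # apply a new fstab entry
      df -h /cephfs
      touch /cephfs/test && rm /cephfs/test   # requires a read-write key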

Notes

  • A particular share can have more than one user key provisioned for it.
    • This allows more granular access to the filesystem, for example if you need some hosts to access the filesystem only in a read-only capacity.
    • If you have multiple keys for a share, you can add the extra keys to your host and adapt the above mounting procedure; a read-only sketch follows these notes.
  • This service is not available to hosts outside of the OpenStack cluster.
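
For example, with a second cephx rule named MyCephFS-RO (a hypothetical read-only key, created via Manage Rules as above and added to the keyring file), a read-only mount on Arbutus would look like:

    sudo mount -t ceph 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs-ro/ -o name=MyCephFS-RO,ro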