CephFS

From Alliance Doc
Revision as of 13:50, 29 February 2024 by Poq (talk | contribs)
CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@tech.alliancecan.ca.

This is a fairly technical procedure that assumes basic Linux skills for creating/editing files, setting permissions, and creating mount points. For assistance in setting up this service, write to cloud@tech.alliancecan.ca.

Procedure

Request access to shares

If you do not already have a quota for the service, you will need to request this through cloud@tech.alliancecan.ca. In your request please provide the following:

  • OpenStack project name
  • amount of quota required in GB
  • number of shares required

Openstack Configuration: Create CephFS share

  1. Create the share.
Configuration of CephFS in the Horizon GUI
    • In Project --> Share --> Shares, click on +Create Share.
    • Share Name = enter a name that identifies your project (e.g. project-name-shareName)
    • Share Protocol = CephFS
    • Size = size you need for this share
    • Share Type = cephfs
    • Availability Zone = nova
    • Do not check Make visible for all, otherwise the share will be accessible by all users in all projects.
    • Click on the Create button.
  2. Create an access rule to generate an access key.
    • In Project --> Share --> Shares --> Actions column, select Manage Rules from the drop-down menu.
    • Click on the +Add Rule button (right of page).
    • Access Type = cephx
    • Access Level = select read-write or read-only (you can create multiple rules for either access level if required)
    • Access To = select a key name that describes the key (e.g. MyCephFS-RW)
  3. Note the share details, which you will need later.
    • In Project --> Share --> Shares, click on the name of the share.
    • In the Share Overview, note the Path.
    • Under Access Rules, note the Access Key (the access key is approximately 40 characters and ends with an = sign; if you do not see an access key, you probably did not add an access rule of type cephx).
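The same share and access rule can also be created from the command line with the OpenStack client, if you prefer it over Horizon. This is a sketch, not part of the official procedure; it assumes the python-manilaclient plugin is installed, your project credentials are sourced, and the names (project-name-shareName, MyCephFS-RW) are hypothetical:

```shell
# Create a 10 GB CephFS share of type cephfs (names are placeholders)
openstack share create --name project-name-shareName --share-type cephfs CephFS 10

# Add a read-write cephx access rule; the key name is a placeholder
openstack share access create project-name-shareName cephx MyCephFS-RW --access-level rw

# Display the path you will need for mounting
openstack share show project-name-shareName -c export_locations
```

The Path and Access Key shown by these commands are the same values you would otherwise read off the Horizon share details page.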

VM configuration: install and configure CephFS client

  1. Install the required packages.

    • Red Hat family (RHEL, CentOS, Fedora, Rocky, Alma):

        Check the available releases at `https://download.ceph.com/` and look for the recent rpm-* directories; quincy is the latest stable release at the time of this writing. The compatible distributions are listed at `https://download.ceph.com/rpm-quincy/`. We will show the full installation for el8.
      1. Install relevant repositories for access to ceph client packages:
        File : /etc/yum.repos.d/ceph.repo

        [Ceph]
        name=Ceph packages for $basearch
        baseurl=http://download.ceph.com/rpm-quincy/el8/$basearch
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        
        [Ceph-noarch]
        name=Ceph noarch packages
        baseurl=http://download.ceph.com/rpm-quincy/el8/noarch
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        
        [ceph-source]
        name=Ceph source packages
        baseurl=http://download.ceph.com/rpm-quincy/el8/SRPMS
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        


        The epel repository also needs to be in place:

        sudo dnf install epel-release
        

        You can now install the Ceph libraries, the CephFS client, and other dependencies:

        sudo dnf install -y libcephfs2 python3-cephfs ceph-common python3-ceph-argparse
        


    • Debian family (Debian, Ubuntu, Mint, etc.):

        You can add the repository once you have determined your distribution's {codename} with lsb_release -sc:

            sudo apt-add-repository 'deb https://download.ceph.com/debian-quincy/ {codename} main'
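After adding the repository, the Ceph release key must be trusted and the client packages installed. This is a sketch of the remaining steps, assuming curl and gpg are available on your Debian-family host:

```shell
# Trust the Ceph release signing key (same key referenced in the RPM repo files)
curl -fsSL https://download.ceph.com/keys/release.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/ceph.gpg

# Refresh package lists and install the CephFS client packages
sudo apt update
sudo apt install -y ceph-common ceph-fuse
```

ceph-common provides the mount helper for kernel mounts; ceph-fuse is only needed if you plan to mount via FUSE as described below.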
        
  2. Configure ceph client:

    First, create a ceph.conf file. Note that the mon host addresses differ from one cloud to another; two examples are shown below.

    File : /etc/ceph/ceph.conf

    [client]
        client quota = true
        mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
    


    File : /etc/ceph/ceph.conf

    [global]
    admin socket = /var/run/ceph/$cluster-$name-$pid.asok
    client reconnect stale = true
    debug client = 0/2
    fuse big writes = true
    mon host = 10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789
    [client]
    quota = true
    



  3. Note: these are the monitors for the Arbutus cluster. If connecting to a different cluster, you will need the monitor information specific to that cluster.
    • You can find the monitor information in the share details for your share in the Path field.
  4. Retrieve the connection information from the share page for your connection:

    • Open up the share details by clicking the name of the share in the Shares page.
    • Copy the entire path of the share for mounting the filesystem.
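The kernel mount and ceph-fuse commands below read the access key from files on the VM, so these files must be created from the access key you noted in the share details. This is a sketch with hypothetical values; substitute your own key name and access key:

```shell
# Hypothetical key name and access key -- replace with the values from
# the share's Access Rules page
SHAREKEY=def-project-shareName-read-write
ACCESSKEY='<paste-your-access-key-here>'

# Full keyring file, used by ceph-fuse (--keyring=...)
sudo tee /etc/ceph/client.fullkey.${SHAREKEY} >/dev/null <<EOF
[client.${SHAREKEY}]
    key = ${ACCESSKEY}
EOF

# Key-only file, used by the kernel mount (secretfile=...)
printf '%s\n' "${ACCESSKEY}" | sudo tee /etc/ceph/client.keyonly.${SHAREKEY} >/dev/null

# Both files contain a secret; restrict them to root
sudo chmod 600 /etc/ceph/client.fullkey.${SHAREKEY} /etc/ceph/client.keyonly.${SHAREKEY}
```

The file names used here match the pattern in the example mount commands; any consistent naming will work as long as the mount options point at the right files.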
  5. Mount the filesystem

    • Create a mount point directory somewhere in your host (likely under /mnt/ - e.g. /mnt/ShareName)
    • Via kernel mount using the ceph driver:
      • Via kernel mount using the ceph driver:
      • Syntax: sudo mount -t ceph <mon1>:6789,<mon2>:6789,<mon3>:6789:/volumes/_nogroup/<share_instance_id> <mountPoint> -o name=<shareKeyName>,secretfile=</path/to/keyOnlyFile>
        • e.g. sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-sharename-read-write
    • Via ceph-fuse
      • Install the ceph-fuse package first.
      • Syntax: sudo ceph-fuse <mountPoint> --id=<shareKeyName> --conf=<pathtoCeph.conf> --keyring=<fullKeyringLocation> --client-mountpoint=pathFromShareDetails
        • e.g. sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7
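If the share should be remounted automatically at boot, the kernel mount can be recorded in /etc/fstab. This is a sketch using the example values from the mount command above; the _netdev option delays mounting until the network is up:

```
192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare ceph name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-sharename-read-write,_netdev 0 0
```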

Notes

  • A particular share can have more than one user key provisioned for it.
    • This allows more granular access to the filesystem; for example, if you need some hosts to access the filesystem only in a read-only capacity.
    • If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure.
  • This service is not available to hosts outside of the OpenStack cluster.