CephFS

CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@tech.alliancecan.ca.

This is a fairly technical procedure that assumes basic Linux skills for creating/editing files, setting permissions, and creating mount points. For assistance in setting up this service, write to cloud@tech.alliancecan.ca.

Procedure

Request access to shares

If you do not already have a quota for the service, you will need to request this through cloud@tech.alliancecan.ca. In your request please provide the following:

  • OpenStack project name
  • amount of quota required in GB
  • number of shares required

OpenStack configuration: create CephFS share

Configuration of CephFS on the Horizon GUI
Properly configured CephFS
  1. Create the share.
    • In Project --> Share --> Shares, click on +Create Share.
    • Share Name = enter a name that identifies your project (e.g. project-name-shareName)
    • Share Protocol = CephFS
    • Size = size you need for this share
    • Share Type = cephfs
    • Availability Zone = nova
    • Do not check Make visible for all, otherwise the share will be accessible by all users in all projects.
    • Click on the Create button.
  2. Create an access rule to generate an access key.
    • In Project --> Share --> Shares --> Actions column, select Manage Rules from the drop-down menu.
    • Click on the +Add Rule button (right of page).
    • Access Type = cephx
    • Access Level = select read-write or read-only (you can create multiple rules for either access level if required)
    • Access To = select a key name that describes the key. This name is important: it will be used in the CephFS client configuration on the VM. We will use MyCephFS-RW on this page.
  3. Note the share details which you will need later.
    • In Project --> Share --> Shares, click on the name of the share.
    • In the Share Overview, note the three elements circled in red in the "Properly configured" image: the Path, which will be used in the mount command on the VM; the Access Rules, which will be the client name; and the Access Key, which will let the VM's client connect. An equivalent command-line sketch follows this list.
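
If you prefer the command line, the same share, access rule, and connection details can be handled with the OpenStack Manila client. This is a minimal sketch, assuming the python-manilaclient package is installed and your OpenStack credentials are sourced; the share name and size are only examples.

  # create a 10 GB CephFS share and a read-write cephx access rule
  manila create CephFS 10 --name project-name-shareName --share-type cephfs
  manila access-allow project-name-shareName cephx MyCephFS-RW --access-level rw

  # show the export path (used later to mount the share) and the generated access key
  manila share-export-location-list project-name-shareName
  manila access-list project-name-shareName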

VM configuration: install and configure CephFS client

  1. Install the required packages.

    • Red Hat family (RHEL, CentOS, Fedora, Rocky, Alma):

        Check the available releases at https://download.ceph.com/ and look for recent rpm-* directories; Quincy is the latest stable release at the time of this writing. The compatible distributions are listed at https://download.ceph.com/rpm-quincy/, and we will show the full installation for el8.
      1. Install relevant repositories for access to ceph client packages:
        File : /etc/yum.repos.d/ceph.repo

        [Ceph]
        name=Ceph packages for $basearch
        baseurl=http://download.ceph.com/rpm-quincy/el8/$basearch
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        
        [Ceph-noarch]
        name=Ceph noarch packages
        baseurl=http://download.ceph.com/rpm-quincy/el8/noarch
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        
        [ceph-source]
        name=Ceph source packages
        baseurl=http://download.ceph.com/rpm-quincy/el8/SRPMS
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        


        The EPEL repository also needs to be in place:

        sudo dnf install epel-release
        

        You can now install the Ceph libraries, the CephFS client and other dependencies:

        sudo dnf install -y libcephfs2 python3-cephfs ceph-common python3-ceph-argparse
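
        You can quickly check that the client tools are in place; this is only a sanity check and the exact version string depends on the release you installed.

        ceph --version                    # should report the Quincy (17.x) release installed above
        rpm -q ceph-common libcephfs2     # confirms the packages are installed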
        


        • Debian family (Debian, Ubuntu, Mint, etc.):

        You can add the repository once you have figured out your distro {codename} with lsb_release -sc:

            sudo apt-add-repository 'deb https://download.ceph.com/debian-quincy/ {codename} main'
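
        For example, here is a minimal sketch for a hypothetical Ubuntu VM whose codename is jammy (substitute the codename reported by lsb_release -sc), installing the same client packages as on the Red Hat side:

            # add the Ceph release key, then the Quincy repository (jammy is only an example)
            wget -q -O- https://download.ceph.com/keys/release.asc | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/ceph.gpg > /dev/null
            sudo apt-add-repository 'deb https://download.ceph.com/debian-quincy/ jammy main'

            # install the Ceph libraries and the CephFS client
            sudo apt update
            sudo apt install -y ceph-common libcephfs2 python3-cephfs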
        
  2. Configure ceph client:

    Once the client is installed, you can create a ceph.conf file; note the different mon host values for the different clouds.

    File : /etc/ceph/ceph.conf (Arbutus)

    [global]
    admin socket = /var/run/ceph/$cluster-$name-$pid.asok
    client reconnect stale = true
    debug client = 0/2
    fuse big writes = true
    mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
    [client]
    quota = true
    


    File : /etc/ceph/ceph.conf (SD4H/Juno)

    [global]
    admin socket = /var/run/ceph/$cluster-$name-$pid.asok
    client reconnect stale = true
    debug client = 0/2
    fuse big writes = true
    mon host = 10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789
    [client]
    quota = true
    


    You can find the monitor information in the share details, in the Path field that will be used to mount the volume. If the value on the web page is different from what is shown here, it means that this wiki page is out of date.

    You also need to put your client name and secret in the ceph.keyring file:


    File : /etc/ceph/ceph.keyring

    [client.MyCephFS-RW]
        key = <access Key>
    


    Again, the access key and client name (here MyCephFS-RW) are found under the access rules on your project web page: Project --> Share --> Shares, then click on the name of the share.
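
    Since the keyring contains a secret, it is good practice to restrict its permissions; this is a general hardening step, not a requirement of the service.

    sudo chown root:root /etc/ceph/ceph.keyring
    sudo chmod 600 /etc/ceph/ceph.keyring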


  3. Retrieve the connection information from the share page:

    • Open up the share details by clicking the name of the share in the Shares page.
    • Copy the entire path of the share for mounting the filesystem.
  4. Mount the filesystem

    • Create a mount point directory somewhere in your host (/cephfs is used here)
    •  mkdir /cephfs
      
    • Via kernel mount using the ceph driver. You can do a permanent mount by adding the following to the VM's fstab:
      File : /etc/fstab (Arbutus)

      :/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ ceph name=MyCephFS-RW 0  2


      File : /etc/fstab (SD4H/Juno)

      :/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ ceph name=MyCephFS-RW,mds_namespace=cephfs_4_2,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev,rw 0  2


      Note

    • There is a non-standard/funky : before the device path; it is not a typo! The mount options are different on different systems. The namespace option is required for SD4H/Juno, while the other options are performance tweaks.
    • It can also be done from the command line:
      Arbutus:
      sudo mount -t ceph :/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ -o name=MyCephFS-RW

      SD4H/Juno:
      sudo mount -t ceph :/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ -o name=MyCephFS-RW,mds_namespace=cephfs_4_2,x-systemd.device-timeout=30,x-systemd.mount-timeout=30,noatime,_netdev,rw
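
    • Once the fstab entry is in place, you can mount everything listed in fstab and verify the result. This is only a quick check; the test file is an example and requires a read-write key.

      sudo mount -a                 # mounts everything listed in /etc/fstab
      df -h /cephfs                 # the share should show up with its quota as the size
      sudo touch /cephfs/.mount-test && sudo rm /cephfs/.mount-test   # simple write test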

    • Or via ceph-fuse if the file system needs to be mounted in user space
    • Install ceph-fuse
    • sudo dnf install ceph-fuse
      
    • Let the FUSE mount be accessible in user space by uncommenting user_allow_other in the fuse.conf file:
      File : /etc/fuse.conf

      # mount_max = 1000
      user_allow_other


    • You can now mount CephFS in a user home:
      mkdir ~/my_cephfs
      ceph-fuse ~/my_cephfs/ --id=MyCephFS-RW --conf=$HOME/ceph.conf --keyring=$HOME/ceph.keyring --client-mountpoint=/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c
      

      Note that the client name here is the --id value, and that the ceph.conf and ceph.keyring files are read from the paths given on the command line (here, the user's home directory) rather than from /etc/ceph/.
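
    • To unmount the user-space mount later, use fusermount on the same mount point (a standard FUSE command, shown here as an example):

      fusermount -u ~/my_cephfs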

Notes

  • A particular share can have more than one user key provisioned for it.
    • This allows more granular access to the filesystem, for example if you need some hosts to access the filesystem only in a read-only capacity.
    • If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure; see the read-only example below.
  • This service is not available to hosts outside of the OpenStack cluster.
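
For example, assuming a second, read-only access rule named MyCephFS-RO was created for the share (the name is hypothetical) and its key was added to /etc/ceph/ceph.keyring as a [client.MyCephFS-RO] section, a host that should only read the data could mount it with:

  sudo mount -t ceph :/volumes/_nogroup/f6cb8f06-f0a4-4b88-b261-f8bd6b03582c /cephfs/ -o name=MyCephFS-RO,ro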