CephFS
CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@tech.alliancecan.ca.
This is a fairly technical procedure that assumes basic Linux skills for creating/editing files, setting permissions, and creating mount points. For assistance in setting up this service, write to cloud@tech.alliancecan.ca.
Procedure
If you do not already have a quota for the service, you will need to request this through cloud@tech.alliancecan.ca. In your request please provide the following:
- OpenStack project name
- amount of quota required in GB
- number of shares required
- Create the share.
- In Project --> Share --> Shares, click on +Create Share.
- Share Name = enter a name that identifies your project (e.g. project-name-shareName)
- Share Protocol = CephFS
- Size = size you need for this share
- Share Type = cephfs
- Availability Zone = nova
- Do not check Make visible for all, otherwise the share will be accessible by all users in all projects.
- Click on the Create button.
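If you prefer the command line, the same share can be created with the Manila CLI (a sketch, assuming the python-manilaclient package is installed and your OpenStack credentials are sourced; the name and size are the example values from above):
# protocol and size (in GB) are positional arguments; the options mirror the dashboard fields above
manila create CephFS 10 --name project-name-shareName --share-type cephfs --availability-zone nova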
- Create an access rule to generate an access key.
- In Project --> Share --> Shares --> Actions column, select Manage Rules from the drop-down menu.
- Click on the +Add Rule button (right of page).
- Access Type = cephx
- Access Level = select read-write or read-only (you can create multiple rules for either access level if required)
- Access To = select a key name that describes the key. This name is important: it will be used in the CephFS client configuration on the VM. We will use MyCephFS-RW on this page.
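The equivalent rule can also be added from the command line (a sketch, using the manila client and the example names from above):
manila access-allow project-name-shareName cephx MyCephFS-RW --access-level rw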
- Note the share details which you will need later.
- In Project --> Share --> Shares, click on the name of the share.
- In the Share Overview, note the three elements circled in red in the "Properly configured" image: the Path, which will be used in the mount command on the VM; the Access Rules, which will be the client name; and the Access Key, which will let the VM's client connect.
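The same details can be retrieved from the command line (a sketch, using the manila client and the example share name from above):
manila share-export-location-list project-name-shareName   # shows the Path
manila access-list project-name-shareName                  # shows the access rule names and access keys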
VM configuration: install and configure CephFS client
Install the required packages.
Red Hat family (RHEL, CentOS, Fedora, Rocky, Alma):
- Check the available releases at https://download.ceph.com/ and look for the recent rpm-* directories; quincy is the latest stable release at the time of this writing. The compatible distros are listed at https://download.ceph.com/rpm-quincy/; we will show the full installation for el8.
- Install the relevant repositories for access to the Ceph client packages:
File : /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-quincy/el8/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-quincy/el8/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-quincy/el8/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
The epel repo also needs to be in place:
sudo dnf install epel-release
You can now install the Ceph libraries, the CephFS client, and other dependencies:
sudo dnf install -y libcephfs2 python3-cephfs ceph-common python3-ceph-argparse
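You can verify the installation by checking the client version (quincy corresponds to version 17.x):
ceph --version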
Debian family (Debian, Ubuntu, Mint, etc.):
You can set up the repository once you have figured out your distro codename ({codename} below) with lsb_release -sc:
sudo apt-add-repository 'deb https://download.ceph.com/debian-quincy/ {codename} main'
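To complete the Debian-family setup, you also need the Ceph release key and the client packages (a sketch; the package names mirror the Red Hat instructions above):
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -   # on newer releases, place a dearmored copy under /etc/apt/trusted.gpg.d/ instead
sudo apt update
sudo apt install -y ceph-common ceph-fuse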
Configure ceph client:
First, create a ceph.conf file. Note the different mon host values for the different clouds.
File : /etc/ceph/ceph.conf (Arbutus)
[global]
admin socket = /var/run/ceph/$cluster-$name-$pid.asok
client reconnect stale = true
debug client = 0/2
fuse big writes = true
mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
[client]
quota = true
File : /etc/ceph/ceph.conf (another cloud; note the different mon host values)
[global]
admin socket = /var/run/ceph/$cluster-$name-$pid.asok
client reconnect stale = true
debug client = 0/2
fuse big writes = true
mon host = 10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789
[client]
quota = true
- Note: the first file above lists the monitors for the Arbutus cluster. If connecting to a different cluster, you will need the monitor information specific to that cluster.
- You can find the monitor information in the share details for your share in the Path field.
Retrieve the connection information from the share page for your connection:
- Open up the share details by clicking the name of the share in the Shares page.
- Copy the entire path of the share for mounting the filesystem.
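Create the client key files that the mount commands below expect (a sketch: <shareKeyName> is the access rule name, MyCephFS-RW in our example, and <access key> is the key shown in the share's access rules):

File : /etc/ceph/client.fullkey.<shareKeyName> (used by ceph-fuse)
[client.<shareKeyName>]
key = <access key>

File : /etc/ceph/client.keyonly.<shareKeyName> (used as secretfile by the kernel mount; contains only the key itself)
<access key>

These files should be owned by root and not world-readable, e.g. sudo chmod 600 /etc/ceph/client.*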
Mount the filesystem
- Create a mount point directory somewhere in your host (likely under /mnt/ - e.g. /mnt/ShareName)
- Via kernel mount using the ceph driver:
- Syntax:
sudo mount -t ceph <path information> <mountPoint> -o name=<shareKeyName>,secretfile=</path/to/keyOnlyFile>
- where <path information> is the Path from the share details, of the form mon1:6789,mon2:6789,mon3:6789:/volumes/_nogroup/share_instance_id
- e.g.
sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-sharename-read-write
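To make the kernel mount persist across reboots, an equivalent line can be added to /etc/fstab (a sketch based on the example above; _netdev delays mounting until the network is up):
192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare ceph name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-sharename-read-write,_netdev 0 0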
- Via ceph-fuse
- You need to install the ceph-fuse package first.
- Syntax:
sudo ceph-fuse <mountPoint> --id=<shareKeyName> --conf=</path/to/ceph.conf> --keyring=</path/to/fullKeyringFile> --client-mountpoint=<pathFromShareDetails>
- e.g.
sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7
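Either way, you can check that the share is mounted and, with a read-write key, writable (a quick sanity check; the test file name is arbitrary):
df -h /mnt/WebServerShare                                  # the share should show up with its CephFS path
sudo touch /mnt/WebServerShare/.write-test && sudo rm /mnt/WebServerShare/.write-test
To unmount, use sudo umount /mnt/WebServerShare for a kernel mount, or sudo fusermount -u /mnt/WebServerShare for a ceph-fuse mount.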
Notes
- A particular share can have more than one user key provisioned for it.
- This allows more granular access to the filesystem, for example if you need some hosts to access the filesystem only in a read-only capacity (see the sketch after this list).
- If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure.
- This service is not available to hosts outside of the OpenStack cluster.
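For example, assuming a second read-only rule was added with the key name MyCephFS-RO (a hypothetical name, following the naming used above), a host could mount the same share read-only via the kernel driver:
# the ro option enforces read-only behaviour at the mount level, matching the read-only key
sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=MyCephFS-RO,secretfile=/etc/ceph/client.keyonly.MyCephFS-RO,ro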