CephFS
Revision as of 17:39, 2 November 2022
CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@tech.alliancecan.ca.
This is a fairly technical procedure that assumes basic Linux skills for creating/editing files, setting permissions, and creating mount points. For assistance in setting up this service, write to cloud@tech.alliancecan.ca.
Procedure
If you do not already have a quota for the service, you will need to request this through cloud@tech.alliancecan.ca. In your request, please provide the following:
- OpenStack project name
- amount of quota required in GB
- number of shares required
- Create the share.
- In Project --> Share --> Shares, click on +Create Share.
- Share Name = enter a name that identifies your project (e.g. project-name-shareName)
- Share Protocol = CephFS
- Size = size you need for this share
- Share Type = cephfs
- Availability Zone = nova
- Do not check Make visible for all, otherwise the share will be accessible by all users in all projects
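The steps above can also be done from the command line with the Manila CLI (a sketch, assuming the python-manilaclient package is installed and your OpenStack RC file has been sourced; the share name and size are placeholders):

```shell
# Create a 10 GB CephFS share with the cephfs share type in the nova availability zone
manila create CephFS 10 \
  --name project-name-shareName \
  --share-type cephfs \
  --availability-zone nova
```

Leaving out any public/visibility flag keeps the share private to your project, matching the dashboard instructions above.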
- Create an access rule to generate an access key.
- In Project --> Share --> Shares --> Actions column, select Manage Rules from the dropdown menu.
- Click on the +Add Rule button (right of page).
- Access Type = cephx
- Access Level = select read-write or read-only (you can create multiple rules for either access level if required)
- Access To = select a key name that describes the key (e.g. def-project-shareName-read-write)
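The same access rule can be created from the CLI (a sketch; the share and key names are the examples used above):

```shell
# Grant read-write access via a cephx rule; use --access-level ro for read-only
manila access-allow project-name-shareName cephx def-project-shareName-read-write --access-level rw
```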
- Note the share details.
- In Project --> Share --> Shares, click on the name of the share.
- In the Share Overview, note the Path which you will need later.
- Under Access Rules, note the Access Key which you will need later (the access key is approximately 40 characters and ends with the = sign); if you do not see an access key, you probably didn't add an access rule of type cephx.
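These details can also be read back from the CLI (a sketch; the names are the examples used above):

```shell
manila show project-name-shareName          # details include the export Path
manila access-list project-name-shareName   # lists each rule and its access key
```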
Configure Host
Install the required packages.
Red Hat family (RHEL, CentOS, Fedora, Scientific Linux, SUSE, etc.):
- Install relevant repositories for access to ceph client packages:
ceph-stable (nautilus is current as of this writing)
    https://docs.ceph.com/en/nautilus/install/get-packages/
epel (sudo yum install epel-release)
- Install packages to enable the ceph client on all the instances where you plan on mounting the share:
libcephfs2
python-cephfs
ceph-common
python-ceph-argparse
ceph-fuse (only if you intend a fuse mount)
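On the Red Hat family, the installation itself might look like the following sketch (assuming yum and the repositories above are already configured):

```shell
sudo yum install epel-release
sudo yum install libcephfs2 python-cephfs ceph-common python-ceph-argparse
sudo yum install ceph-fuse   # only if you intend a fuse mount
```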
- Debian family (Debian, Ubuntu, Mint, etc.):
  - Install relevant repositories for access to ceph client packages:
    https://docs.ceph.com/en/nautilus/install/get-packages/
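On the Debian family, a sketch of the repository setup and installation (the release key and repository URL follow the ceph get-packages page linked above; the distribution codename comes from lsb_release and may need adjusting for your release):

```shell
# Add the ceph release key and the nautilus repository
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo "deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list

sudo apt update
sudo apt install ceph-common ceph-fuse
```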
Configure Keys:
Create two files in your instance, each containing the access key. This key can be found in the rule definition, or in the Access Rules section of your share definition.
File 1: /etc/ceph/client.fullkey.shareName (e.g. client.fullkey.def-project-shareName-read-write)
- contents:
[client.shareName]
    key = AccessKey
File 2: /etc/ceph/client.keyonly.shareName (e.g. client.keyonly.def-project-shareName-read-write)
- contents:
AccessKey
- This file only contains the Access Key
Set the ownership and permissions of these files to protect the key information:
- Each file should be owned by root
sudo chown root:root filename
- Each file should be readable only by root
sudo chmod 600 filename
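The two key files and their permissions can be created in one pass, sketched below (the key name and AccessKey value are placeholders; run this on the instance where the share will be mounted):

```shell
KEYNAME=def-project-shareName-read-write
ACCESSKEY='AccessKey'   # paste your ~40-character access key here

# File 1: full keyring format, used by ceph-fuse
sudo tee /etc/ceph/client.fullkey.${KEYNAME} > /dev/null <<EOF
[client.${KEYNAME}]
    key = ${ACCESSKEY}
EOF

# File 2: the bare key only, used by the kernel mount's secretfile option
echo "${ACCESSKEY}" | sudo tee /etc/ceph/client.keyonly.${KEYNAME} > /dev/null

# Owned by root and readable only by root
sudo chown root:root /etc/ceph/client.fullkey.${KEYNAME} /etc/ceph/client.keyonly.${KEYNAME}
sudo chmod 600 /etc/ceph/client.fullkey.${KEYNAME} /etc/ceph/client.keyonly.${KEYNAME}
```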
Create /etc/ceph/ceph.conf with contents:
[client]
    client quota = true
    mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
- Note: these are the monitors for the Arbutus cluster - if connecting to a different cluster you will need the monitor information specific to that cluster.
- You can find the monitor information in the Share Details for your share in the "Path" field.
Retrieve the connection information from the share page for your connection:
- Open up the share details by clicking the name of the share in the Shares page.
- Copy the entire path of the share for mounting the filesystem.
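The copied Path contains both the monitor list and the share's subpath, separated by the last colon; shell parameter expansion can split it (a sketch using the example path that appears later on this page):

```shell
# Example Path as copied from the Share Overview
SHARE_PATH="10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7"

MONITORS="${SHARE_PATH%:*}"   # everything before the last ':' is the monitor list
SUBPATH="${SHARE_PATH##*:}"   # everything after the last ':' is the share subpath
echo "$MONITORS"
echo "$SUBPATH"
```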
Mount the filesystem
- Create a mount point directory somewhere on your host (likely under /mnt/ - e.g. /mnt/ShareName)
- Via kernel mount using the ceph driver:
- Syntax:
sudo mount -t ceph <mon1:6789,mon2:6789,mon3:6789>:/volumes/_nogroup/<share_instance_id> <mountPoint> -o name=<shareKeyName>,secretfile=</path/to/keyonlyFile>
- e.g.
sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-write
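To make the kernel mount persistent across reboots, an /etc/fstab entry can be added (a sketch using the same example values as above; the _netdev option defers the mount until networking is up):

```
192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare ceph name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-write,_netdev 0 0
```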
- Via ceph-fuse
- Requires the ceph-fuse package (see above)
- Syntax:
sudo ceph-fuse <mountPoint> --id=<shareKeyName> --conf=<pathtoCeph.conf> --keyring=<fullKeyringLocation> --client-mountpoint=pathFromShareDetails
- e.g.
sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7
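Either way, the mount can be verified once the command returns (a sketch using the example mount point above):

```shell
findmnt /mnt/WebServerShare   # shows the source path, filesystem type, and mount options
df -h /mnt/WebServerShare     # the reported size should match the share quota
```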
Notes
- A particular share can have more than one user key provisioned for it.
- This allows more granular access to the filesystem; for example, if you need some hosts to access the filesystem in a read-only capacity.
- If you have multiple keys for a share you can add the extra keys to your host and modify the above mounting procedure.
- This service is not available to hosts outside of the OpenStack cluster.