CephFS
Revision as of 04:13, 29 February 2024
CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@tech.alliancecan.ca.
This is a fairly technical procedure that assumes basic Linux skills for creating/editing files, setting permissions, and creating mount points. For assistance in setting up this service, write to cloud@tech.alliancecan.ca.
Procedure
If you do not already have a quota for the service, you will need to request this through cloud@tech.alliancecan.ca. In your request please provide the following:
- OpenStack project name
- amount of quota required in GB
- number of shares required
- Create the share.
- In Project --> Share --> Shares, click on +Create Share.
- Share Name = enter a name that identifies your project (e.g. project-name-shareName)
- Share Protocol = CephFS
- Size = size you need for this share
- Share Type = cephfs
- Availability Zone = nova
- Do not check Make visible for all, otherwise the share will be accessible by all users in all projects.
- Click on the Create button.
- Create an access rule to generate an access key.
- In Project --> Share --> Shares --> Actions column, select Manage Rules from the drop-down menu.
- Click on the +Add Rule button (right of page).
- Access Type = cephx
- Access Level = select read-write or read-only (you can create multiple rules for either access level if required)
- Access To = select a key name that describes the key (e.g. MyCephFS-RW)
- Note the share details which you will need later.
- In Project --> Share --> Shares, click on the name of the share.
- In the Share Overview, note the Path.
- Under Access Rules, note the Access Key (the access key is approximately 40 characters and ends with the = sign; if you do not see an access key, you probably didn't add an access rule of type cephx).
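The Horizon steps above can also be done from the command line with the python-manilaclient; a hedged sketch, where the share name, size, and key name are illustrative values, not part of this guide's project:

```shell
# Equivalent CLI workflow (requires python-manilaclient and sourced
# OpenStack credentials); names and size below are illustrative.
manila create CephFS 10 --name project-name-shareName --share-type cephfs --availability-zone nova
manila access-allow project-name-shareName cephx MyCephFS-RW --access-level rw
manila show project-name-shareName          # note the "path" value
manila access-list project-name-shareName   # note the "access_key" column
```

These commands require access to the OpenStack project, so they are shown as a sketch rather than a verified transcript.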
VM configuration: install and configure CephFS client
Install the required packages.
Red Hat family (RHEL, CentOS, Fedora, Rocky, Alma ):
- Check the available releases at https://download.ceph.com/rpm-*; quincy is the correct release at this time. You can also check distribution compatibility under https://download.ceph.com/rpm-quincy/. We will show the full installation for el8.
- Install relevant repositories for access to ceph client packages:
File : /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-quincy/el8/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-quincy/el8/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-quincy/el8/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
You can now install the client and its dependencies:
dnf install -y libcephfs2 python3-cephfs ceph-common python3-ceph-argparse
Debian family (Debian, Ubuntu, Mint, etc.):
You can add the repository once you have determined your distro codename with lsb_release -sc:

sudo apt-add-repository 'deb https://download.ceph.com/debian-quincy/ {codename} main'
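After adding the repository you still need the Ceph release key and the client packages; a minimal sketch following the upstream Debian instructions, assuming the package names match the Red Hat list above:

```shell
# Fetch and trust the Ceph release key, then install the client packages
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y ceph-common libcephfs2 python3-cephfs
```

This requires network access and root privileges on the VM, so it is shown as a sketch.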
Configure ceph client:
First, create a ceph.conf file. Note that the mon host values differ between clouds.
Arbutus:

File : /etc/ceph/ceph.conf

[client]
client quota = true
mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789

SD4H/Juno:

File : /etc/ceph/ceph.conf

[global]
admin socket = /var/run/ceph/$cluster-$name-$pid.asok
client reconnect stale = true
debug client = 0/2
fuse big writes = true
mon host = 10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789

[client]
quota = true
- Note: the monitors above are specific to each cloud. If connecting to a different cluster, you will need the monitor information specific to that cluster.
- You can find the monitor information in the share details for your share in the Path field.
Retrieve the connection information from the share page for your connection:
- Open up the share details by clicking the name of the share in the Shares page.
- Copy the entire path of the share for mounting the filesystem.
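The mount commands below reference two key files on the VM that this page does not show being created: a "key only" file for the kernel mount's secretfile= option, and a full keyring for ceph-fuse's --keyring option. A sketch of their layout, using a placeholder key and the example rule name MyCephFS-RW; in practice the files belong under /etc/ceph/ with mode 600 and contain the real Access Key from Horizon:

```shell
# Placeholder values; substitute the Access Key and rule name from Horizon.
KEY="AQAAAAAAAAAAAAAAexamplekeyexamplekeyA=="
NAME="MyCephFS-RW"
# Demo directory; in practice use /etc/ceph/ (requires sudo)
mkdir -p ceph-demo
# keyonly file: the bare secret, for the kernel mount's secretfile= option
printf '%s\n' "$KEY" > "ceph-demo/client.keyonly.$NAME"
# full keyring: a [client.<name>] section plus key, for ceph-fuse --keyring
printf '[client.%s]\n\tkey = %s\n' "$NAME" "$KEY" > "ceph-demo/client.fullkey.$NAME"
# keys are secrets: restrict permissions
chmod 600 "ceph-demo/client.keyonly.$NAME" "ceph-demo/client.fullkey.$NAME"
cat "ceph-demo/client.fullkey.$NAME"
```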
Mount the filesystem
- Create a mount point directory somewhere in your host (likely under /mnt/ - e.g. /mnt/ShareName)
- Via kernel mount using the ceph driver:
- Syntax:
sudo mount -t ceph <mon1>:6789,<mon2>:6789,<mon3>:6789:/volumes/_nogroup/<share_instance_id> <mountPoint> -o name=<shareKeyName>,secretfile=</path/to/keyringfileOnlyFile>
- e.g
sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-sharename-read-write
- Via ceph-fuse
- You need to install the ceph-fuse package first.
- Syntax:
sudo ceph-fuse <mountPoint> --id=<shareKeyName> --conf=<pathtoCeph.conf> --keyring=<fullKeyringLocation> --client-mountpoint=pathFromShareDetails
- e.g.
sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7
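To make the kernel mount from the example above persist across reboots, an /etc/fstab entry can be used; a sketch built from this page's illustrative monitor addresses, share path, and key name, where _netdev delays the mount until networking is up:

```
192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare ceph name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-sharename-read-write,noatime,_netdev 0 2
```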
Notes
- A particular share can have more than one user key provisioned for it.
- This allows more granular access to the filesystem, for example if you need some hosts to access the filesystem only in a read-only capacity.
- If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure.
- This service is not available to hosts outside of the OpenStack cluster.