CephFS



CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@tech.alliancecan.ca.

This is a fairly technical procedure that assumes basic Linux skills for creating/editing files, setting permissions, and creating mount points. For assistance in setting up this service, write to cloud@tech.alliancecan.ca.

Procedure

Request access to shares

If you do not already have a quota for the service, you will need to request this through cloud@tech.alliancecan.ca. In your request please provide the following:

  • OpenStack project name
  • amount of quota required in GB
  • number of shares required

OpenStack configuration: create a CephFS share

  1. Create the share.
[Figure: configuration of CephFS in the Horizon GUI]
    • In Project --> Share --> Shares, click on +Create Share.
    • Share Name = enter a name that identifies your project (e.g. project-name-shareName)
    • Share Protocol = CephFS
    • Size = size you need for this share
    • Share Type = cephfs
    • Availability Zone = nova
    • Do not check Make visible for all, otherwise the share will be accessible by all users in all projects.
    • Click on the Create button.
  2. Create an access rule to generate an access key.
    • In Project --> Share --> Shares --> Actions column, select Manage Rules from the drop-down menu.
    • Click on the +Add Rule button (right of page).
    • Access Type = cephx
    • Access Level = select read-write or read-only (you can create multiple rules for either access level if required)
    • Access To = select a key name that describes the key (e.g. MyCephFS-RW)
  3. Note the share details which you will need later.
    • In Project --> Share --> Shares, click on the name of the share.
    • In the Share Overview, note the Path.
    • Under Access Rules, note the Access Key (the access key is approximately 40 characters and ends with the = sign; if you do not see an access key, you probably didn't add an access rule of type cephx).
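
If you prefer the command line, the same share and access rule can be created with the Manila CLI; a minimal sketch, assuming the python-manilaclient is installed and your OpenStack credentials are sourced (the share name, size, and key name below are illustrative):

    # Create a 10 GB CephFS share with the cephfs share type
    manila create CephFS 10 --name project-name-shareName --share-type cephfs

    # Add a cephx access rule; MyCephFS-RW is an example key name
    manila access-allow project-name-shareName cephx MyCephFS-RW --access-level rw

    # Show the Path and the Access Key noted in the steps above
    manila share-export-location-list project-name-shareName
    manila access-list project-name-shareName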

VM configuration: install and configure CephFS client

  1. Install the required packages.

    • Red Hat family (RHEL, CentOS, Fedora, Rocky, Alma):

        Check the available releases at https://download.ceph.com/ (directories named rpm-*). Quincy is the current release at the time of writing; the distributions it supports are listed under https://download.ceph.com/rpm-quincy/. The full installation below is shown for el8.
      1. Install relevant repositories for access to ceph client packages:
        File : /etc/yum.repos.d/ceph.repo

        [Ceph]
        name=Ceph packages for $basearch
        baseurl=http://download.ceph.com/rpm-quincy/el8/$basearch
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        
        [Ceph-noarch]
        name=Ceph noarch packages
        baseurl=http://download.ceph.com/rpm-quincy/el8/noarch
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        
        [ceph-source]
        name=Ceph source packages
        baseurl=http://download.ceph.com/rpm-quincy/el8/SRPMS
        enabled=1
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc
        


        You can now install the client and its dependencies:

        dnf install -y libcephfs2 python3-cephfs ceph-common python3-ceph-argparse
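
        As a quick sanity check, the installed client can report its version (the output shown in the comment is an example; the exact string depends on the packages you installed):

        ceph --version    # e.g. ceph version 17.2.x (...) quincy (stable)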
        


    • Debian family (Debian, Ubuntu, Mint, etc.):

        You can add the repository once you have determined your distribution's {codename} with lsb_release -sc:

            sudo apt-add-repository 'deb https://download.ceph.com/debian-quincy/ {codename} main'
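
        The repository key must be trusted and the client packages installed; a minimal sketch, using the release key published at download.ceph.com (note that apt-key is deprecated on recent releases, where the key should instead be placed under /etc/apt/trusted.gpg.d/):

            # Trust the Ceph release key
            wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

            # Install the client packages (ceph-fuse only if you intend a FUSE mount)
            sudo apt update
            sudo apt install ceph-common ceph-fuse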
        
  2. Configure the Ceph client:

    First, create a ceph.conf file. Note that the mon host values differ between the two clouds:

    On Arbutus, File : /etc/ceph/ceph.conf

    [client]
        client quota = true
        mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
    


    On SD4H/Juno, File : /etc/ceph/ceph.conf

    [global]
    admin socket = /var/run/ceph/$cluster-$name-$pid.asok
    client reconnect stale = true
    debug client = 0/2
    fuse big writes = true
    mon host = 10.65.0.10:6789,10.65.0.12:6789,10.65.0.11:6789
    [client]
    quota = true
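
    Next, create the two key files referenced by the mount commands below. Each contains the access key you noted from the share's Access Rules:

    File : /etc/ceph/client.fullkey.shareName (e.g. client.fullkey.def-project-shareName-read-write)

    [client.shareName]
        key = AccessKey

    File : /etc/ceph/client.keyonly.shareName (e.g. client.keyonly.def-project-shareName-read-write); this file contains only the access key

    AccessKey

    Both files should be owned by root and readable only by root to protect the key information:

    sudo chown root:root /etc/ceph/client.*
    sudo chmod 600 /etc/ceph/client.*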
    



  3. Note: the mon host values shown above are the monitors for the Arbutus and SD4H/Juno clusters. If connecting to a different cluster, you will need the monitor information specific to that cluster.
    • You can find the monitor information in the share details for your share in the Path field.
  4. Retrieve the connection information from the share page:

    • Open up the share details by clicking the name of the share in the Shares page.
    • Copy the entire path of the share for mounting the filesystem.
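
    For illustration, a Path decomposes as follows (monitor addresses from the Arbutus example above; the UUID is specific to your share):

        10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7

    The comma-separated addresses before :/volumes are the monitors; /volumes/_nogroup/<share_instance_id> is the share's path within CephFS, used as the client-mountpoint for a FUSE mount.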
  5. Mount the filesystem

    • Create a mount point directory somewhere on your host (likely under /mnt/ - e.g. /mnt/ShareName)
    • Via kernel mount using the ceph driver:
      • Syntax: sudo mount -t ceph <path information> <mountPoint> -o name=<shareKeyName>,secretfile=</path/to/keyOnlyFile>
      • The share path takes the form mon1:6789,mon2:6789,mon3:6789:/volumes/_nogroup/share_instance_id
        • e.g. sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-sharename-read-write
    • Via ceph-fuse
      • Requires the ceph-fuse package (see the installation step above)
      • Syntax: sudo ceph-fuse <mountPoint> --id=<shareKeyName> --conf=<pathtoCeph.conf> --keyring=<fullKeyringLocation> --client-mountpoint=pathFromShareDetails
        • e.g. sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7
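
    To remount the share automatically at boot, the kernel mount can also be written as an /etc/fstab line; a minimal sketch reusing the example values above (_netdev defers mounting until the network is up):

      192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare ceph name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-sharename-read-write,_netdev 0 0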

Notes

  • A particular share can have more than one user key provisioned for it.
    • This allows more granular access to the filesystem, for example if you need some hosts to access the filesystem in a read-only capacity.
    • If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure, as in the example below.
  • This service is not available to hosts outside of the OpenStack cluster.
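
    A read-only mount with a second key (the key name here is hypothetical) might look like the following, adding the generic ro option so the client also treats the mount as read-only:

      sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShareRO -o name=def-project-shareName-read-only,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-only,ro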