CephFS
From Alliance Doc

Revision as of 18:48, 19 August 2021


This article is a draft. This is not a complete article: it is a work in progress intended to be published as an article, and may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.




User Guide to Provisioning and Deploying CephFS

Introduction

CephFS provides a common filesystem that can be shared among multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@computecanada.ca.

This is a fairly technical procedure that assumes basic Linux skills for creating and editing files, setting permissions, and creating mount points. Contact the technical resource for your project for assistance in setting up this service.

Technical Procedure

Request Access to Shares

  • If you do not already have a quota for the service, you will need to request one through cloud@computecanada.ca.
    • In your request, please provide the following:
      • OpenStack project name
      • Amount of quota required, in GB
      • Number of shares required, if more than one

Create Share

  1. Create a share in "Shares" under the "Share" menu:
    • Give this a name that identifies your project: project-name-shareName
      • e.g. def-project-shareName
    • Share Protocol = cephfs
    • Size = size you need for this share
    • Share Type = cephfs
    • Availability Zone = nova
    • Do not check "Make visible for all", otherwise the share will be accessible by anyone in every project.
  2. Create an Access Rule which generates an Access Key
    • On the "Shares" pane, click on the drop-down menu under "Actions" and select "Manage Rules".
    • Create a new rule using the "+Add Rule" button.
      • Access Type = cephx
      • Select "read-write" or "read-only" under "Access Level". You can create multiple rules for either access level if required.
      • Choose a key name in the "Access To" field that describes the key (e.g. def-project-shareName-read-write).
  3. Note the Share details:
    • Click on the share.
    • Under "Overview", note the "Path" which you will need later.
    • Under "Access Control", note the "Access Key" which you will need later.
    • Access Keys are approximately 40 characters and end with an "=" sign.
    • If you do not see an Access Key, you probably did not add an Access Rule with Access Type = cephx.
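
If you prefer the command line to the Horizon dashboard, the same share can be created with the OpenStack Manila client. This is a sketch only: it assumes the python-manilaclient package is installed and your OpenStack credentials are loaded, and the share name and 10 GB size are illustrative. The commands are prefixed with $RUN (set to echo) so running the block merely prints them; clear RUN to execute them for real.

```shell
# Sketch: assumes python-manilaclient and loaded OpenStack credentials.
# Names and sizes are examples; substitute your own.
SHARE_NAME="def-project-shareName"      # project name + unique share name
KEY_NAME="${SHARE_NAME}-read-write"     # the "Access To" key name
SIZE_GB=10
RUN=echo   # clear this (RUN=) to actually run the commands

# 1. Create the share: protocol and share type are both cephfs, AZ is nova
$RUN manila create cephfs "$SIZE_GB" --name "$SHARE_NAME" \
     --share-type cephfs --availability-zone nova

# 2. Add a cephx access rule; this is what generates the Access Key
$RUN manila access-allow "$SHARE_NAME" cephx "$KEY_NAME" --access-level rw

# 3. Note the share details: the export path and the generated Access Key
$RUN manila share-export-location-list "$SHARE_NAME"
$RUN manila access-list "$SHARE_NAME"
```

The dashboard steps and these commands are equivalent; either way, note the "Path" and "Access Key" values for the host configuration below.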

Configure Host

  1. Install required packages

    • Red Hat Family (RHEL, CentOS, Fedora, Scientific Linux, SUSE, etc.):

      1. Install the relevant repositories for access to the Ceph client packages:
        ceph-stable (Nautilus is the current release as of this writing)
            https://docs.ceph.com/en/mimic/install/get-packages/
        epel (sudo yum install epel-release)

      2. Install the packages that enable the Ceph client on all the VMs where you plan to mount the share:
        libcephfs2
        python-cephfs
        ceph-common
        python-ceph-argparse
        ceph-fuse (only needed if you intend to use a FUSE mount)

    • Debian Family (Debian, Ubuntu, Mint, etc.):

      1. Add the repositories as described at https://docs.ceph.com/en/mimic/install/get-packages/
      2. Install the client packages, e.g. sudo apt install ceph-common (plus ceph-fuse for a FUSE mount)

  2. Configure Keys:

    • Create two files in your VM, each containing the "Access Key". This key can be found in the rule definition, or in the "Access Rules" section of your share definition.

    • File 1: /etc/ceph/client.fullkey.shareName (e.g. client.fullkey.def-project-shareName-read-write)

      • contents:
        [client.shareName]
            key = AccessKey
        
    • File 2: /etc/ceph/client.keyonly.shareName (e.g. client.keyonly.def-project-shareName-read-write)

      • contents:
        AccessKey
        
      • This file only contains the Access Key
    • Own these files correctly to protect the key information:

      • Each file should be owned by root:
      sudo chown root:root filename

      • Each file should be readable only by root:
      sudo chmod 600 filename
      
  3. Create /etc/ceph/ceph.conf with contents:

    [client]
        client quota = true
        mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
    
    • Note: these are the monitors for the Arbutus cluster; if connecting to a different cluster, you will need the monitor information specific to that cluster.
      • You can find the monitor information in the Share Details for your share in the "Path" field.
  4. Retrieve the connection information from the share page:

    • Open up the share details by clicking the name of the share in the Shares page.
    • Copy the entire path of the share for mounting the filesystem.
  5. Mount the filesystem

    • Create a mount point directory somewhere on your host (likely under /mnt/, e.g. /mnt/ShareName)
    • Via kernel mount using the ceph driver:
      • Syntax: sudo mount -t ceph <path information> <mountPoint> -o name=<shareKeyName>,secretfile=</path/to/keyonlyFile>
      • sudo mount -t ceph mon1:6789,mon2:6789,mon3:6789:/volumes/_nogroup/share_instance_id <mountPoint> -o name=<shareKeyName>,secretfile=<keyonlyFile>
        • e.g. sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-write
    • Via ceph-fuse
      • Requires the ceph-fuse package (see "Install required packages" above)
      • Syntax: sudo ceph-fuse <mountPoint> --id=<shareKeyName> --conf=<pathToCeph.conf> --keyring=<fullKeyringFile> --client-mountpoint=<pathFromShareDetails>
        • e.g. sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7
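
The key files, ceph.conf, and mount commands above can be combined into a single setup script. This is a minimal sketch, assuming the example names used throughout this guide (the share key name, the Arbutus monitor addresses, and the example share path); the Access Key is a placeholder you must replace with your own. It stages the files in a local ceph-staging directory so it can run without root; copy them to /etc/ceph (and chown root:root them) on the real host. The mount commands themselves are left commented because they need root and network access to the Ceph cluster.

```shell
#!/bin/bash
# Minimal sketch of the "Configure Host" steps, using this guide's example
# names. The Access Key is a placeholder, not a real key.
set -euo pipefail

SHARE_KEY_NAME="def-project-shareName-read-write"  # "Access To" field of the rule
ACCESS_KEY="AAAAexampleAccessKeyFromHorizon=="     # placeholder; yours is ~40 chars ending in =
SHARE_PATH="/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7"  # "Path" field of the share
MON_HOSTS="10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789"       # Arbutus monitors
CEPH_DIR="${CEPH_DIR:-./ceph-staging}"             # stage here; copy to /etc/ceph as root

mkdir -p "$CEPH_DIR"

# File 1: full keyring format, used by ceph-fuse via --keyring=
cat > "$CEPH_DIR/client.fullkey.$SHARE_KEY_NAME" <<EOF
[client.$SHARE_KEY_NAME]
    key = $ACCESS_KEY
EOF

# File 2: bare key only, used by the kernel mount via secretfile=
printf '%s\n' "$ACCESS_KEY" > "$CEPH_DIR/client.keyonly.$SHARE_KEY_NAME"

# Minimal client-side ceph.conf, as shown above
cat > "$CEPH_DIR/ceph.conf" <<EOF
[client]
    client quota = true
    mon host = $MON_HOSTS
EOF

# Key files must be readable only by root (run chown root:root as root too)
chmod 600 "$CEPH_DIR/client.fullkey.$SHARE_KEY_NAME" \
          "$CEPH_DIR/client.keyonly.$SHARE_KEY_NAME"

# The mounts need root and network access to the Ceph cluster:
#   sudo mkdir -p /mnt/WebServerShare
#   sudo mount -t ceph "$MON_HOSTS:$SHARE_PATH" /mnt/WebServerShare \
#        -o name=$SHARE_KEY_NAME,secretfile=/etc/ceph/client.keyonly.$SHARE_KEY_NAME
# or, with ceph-fuse:
#   sudo ceph-fuse /mnt/WebServerShare --id="$SHARE_KEY_NAME" \
#        --conf=/etc/ceph/ceph.conf \
#        --keyring=/etc/ceph/client.fullkey.$SHARE_KEY_NAME \
#        --client-mountpoint="$SHARE_PATH"
```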

Notes

  • A particular share can have more than one user key provisioned for it.
    • This allows more granular access to the filesystem, for example if you need some hosts to access the filesystem in a read-only capacity.
    • If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure.
  • This service is not available to hosts outside of the OpenStack cluster.
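
As an illustration of the multiple-key note above: suppose a second, read-only access rule, with the hypothetical name def-project-shareName-read-only, had been added to the same share, and its bare-key file created as described in the host-configuration steps. Read-only hosts could then mount the share as below. The $RUN=echo guard makes the block print the command instead of running it, since a real mount needs root and access to the cluster.

```shell
# Hypothetical read-only key; its bare-key file is created the same way as
# the read-write one, e.g. /etc/ceph/client.keyonly.def-project-shareName-read-only.
# Monitor addresses and share path are the examples from this guide.
RUN=echo   # clear this (RUN=) and run as root to actually mount

$RUN mount -t ceph \
    10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 \
    /mnt/WebServerShare \
    -o ro,name=def-project-shareName-read-only,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-only
```

The cephx key itself already limits these hosts to read-only access; adding the standard ro mount option simply makes that intent explicit on the client side as well.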