CephFS

CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@tech.alliancecan.ca.

This is a fairly technical procedure that assumes basic Linux skills for creating/editing files, setting permissions, and creating mount points. For assistance in setting up this service, write to cloud@tech.alliancecan.ca.

Procedure

Request Access to Shares

If you do not already have a quota for the service, you will need to request this through cloud@tech.alliancecan.ca. In your request, please provide the following:

  • OpenStack project name
  • amount of quota required in GB
  • number of shares required

Create Share

  1. Create the share.
    • In Project --> Share --> Shares, click on +Create Share.
    • Share Name = enter a name that identifies your project (e.g. project-name-shareName)
    • Share Protocol = CephFS
    • Size = size you need for this share
    • Share Type = cephfs
    • Availability Zone = nova
    • Do not check Make visible for all, otherwise the share will be accessible by all users in all projects.
  2. Create an access rule to generate an access key.
    • In Project --> Share --> Shares --> Actions column, select Manage Rules from the dropdown menu.
    • Click on the +Add Rule button (right of page).
    • Access Type = cephx
    • Access Level = select read-write or read-only (you can create multiple rules for either access level if required)
    • Access To = select a key name that describes the key (e.g. def-project-shareName-read-write)
  3. Note the share details.
    • In Project --> Share --> Shares, click on the name of the share.
    • In the Share Overview, note the Path which you will need later.
    • Under Access Rules, note the Access Key which you will need later (the access key is approximately 40 characters long and ends with an = sign; if you do not see an access key, you probably did not add an access rule of type cephx).
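
The same steps can also be performed from the command line with the OpenStack Manila client. This is a sketch only, not part of the official procedure; it assumes python-manilaclient is installed, your OpenStack credentials are sourced, and the names shown (project-name-shareName, def-project-shareName-read-write) are placeholders for your own:

    # Create a 10 GB CephFS share
    manila create cephfs 10 --name project-name-shareName --share-type cephfs --availability-zone nova
    # Add a cephx access rule; the generated access key appears in the rule listing
    manila access-allow project-name-shareName cephx def-project-shareName-read-write --access-level rw
    manila access-list project-name-shareName
    # Show the share details, including the Path (export location)
    manila show project-name-shareName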

Configure Host

  1. Install the required packages.

    • Red Hat family (RHEL, CentOS, Fedora, Scientific Linux, SUSE, etc.):

      1. Install relevant repositories for access to ceph client packages:
        ceph-stable (nautilus is current as of this writing)
            https://docs.ceph.com/en/nautilus/install/get-packages/
        epel (sudo yum install epel-release)
        
        
      2. Install packages to enable the ceph client on all the instances where you plan on mounting the share:
        libcephfs2
        python-cephfs
        ceph-common
        python-ceph-argparse
        ceph-fuse (only if you intend a fuse mount)
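
        For example, on a CentOS host the two sub-steps above might look like the following. This is a sketch only; it assumes the ceph-stable (nautilus) repository from the link above has already been configured for your distribution:

          # Enable EPEL, then install the ceph client packages
          sudo yum install epel-release
          sudo yum install libcephfs2 python-cephfs ceph-common python-ceph-argparse
          # Only needed if you intend to use a fuse mount
          sudo yum install ceph-fuse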
        
    • Debian family (Debian, Ubuntu, Mint, etc.):

        https://docs.ceph.com/en/nautilus/install/get-packages/
      
  2. Configure Keys:

    • Create two files in your instance, each containing the access key. This key can be found in the rule definition, or in the Access Rules section of your share definition.

    • File 1: /etc/ceph/client.fullkey.shareName (e.g. client.fullkey.def-project-shareName-read-write)

      • contents:
        [client.shareName]
            key = AccessKey
        
    • File 2: /etc/ceph/client.keyonly.shareName (e.g. client.keyonly.def-project-shareName-read-write)

      • contents:
        AccessKey
        
      • This file only contains the access key.
    • Set the ownership and permissions on these files correctly to protect the key information:

      • Each file should be owned by root
      sudo chown root:root filename
      
      • Each file should be readable only by root
      sudo chmod 600 filename
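
    • Putting this together, creating and protecting the two key files might look like the following sketch. The rule name def-project-shareName-read-write and the AccessKey placeholder are assumptions; substitute your own rule name and key:

      # File 1: keyring format (used by the fuse mount); AccessKey is a placeholder for your actual key
      printf '[client.def-project-shareName-read-write]\n    key = AccessKey\n' \
        | sudo tee /etc/ceph/client.fullkey.def-project-shareName-read-write >/dev/null
      # File 2: the bare access key (used by the kernel mount)
      printf 'AccessKey\n' \
        | sudo tee /etc/ceph/client.keyonly.def-project-shareName-read-write >/dev/null
      # Owned by root and readable only by root
      sudo chown root:root /etc/ceph/client.*.def-project-shareName-read-write
      sudo chmod 600 /etc/ceph/client.*.def-project-shareName-read-write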
      
  3. Create /etc/ceph/ceph.conf with contents:

    [client]
        client quota = true
        mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789
    
    • Note: these are the monitors for the Arbutus cluster. If connecting to a different cluster, you will need the monitor information specific to that cluster.
      • You can find the monitor information in the share details for your share in the Path field.
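
    • As a convenience, the file can be created in one command. A sketch using the Arbutus monitor addresses shown above:

      printf '[client]\n    client quota = true\n    mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789\n' \
        | sudo tee /etc/ceph/ceph.conf >/dev/null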
  4. Retrieve the connection information from the share page for your connection:

    • Open up the share details by clicking the name of the share in the Shares page.
    • Copy the entire path of the share for mounting the filesystem.
  5. Mount the filesystem.

    • Create a mount point directory somewhere on your host (likely under /mnt/, e.g. /mnt/ShareName).
    • Via kernel mount using the ceph driver:
      • Syntax: sudo mount -t ceph <path information> <mountPoint> -o name=<shareKeyName>,secretfile=</path/to/keyonlyFile>
      • The path information takes the form mon1:6789,mon2:6789,mon3:6789:/volumes/_nogroup/<share_instance_id> (this is the Path you noted from the share details)
        • e.g. sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-write
    • Via ceph-fuse
      • Requires the ceph-fuse package (see step 1).
      • Syntax: sudo ceph-fuse <mountPoint> --id=<shareKeyName> --conf=<pathToCeph.conf> --keyring=<fullKeyringLocation> --client-mountpoint=<pathFromShareDetails>
        • e.g. sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7
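
    • After mounting by either method, you can confirm the share is attached (assuming the /mnt/WebServerShare mount point from the examples above):

      # The share should appear with its size and the ceph filesystem type
      df -h /mnt/WebServerShare
      mount | grep WebServerShare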

Notes

  • A particular share can have more than one user key provisioned for it.
    • This allows more granular access to the filesystem, for example if some hosts should only access the filesystem in a read-only capacity.
    • If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure (see the sketch below).
  • This service is not available to hosts outside of the OpenStack cluster.
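
For example, mounting with a second, read-only key on another host might look like the following sketch. It assumes an additional access rule named def-project-shareName-read-only was created with Access Level read-only and its key was stored as /etc/ceph/client.keyonly.def-project-shareName-read-only following the same pattern as above:

  # Same kernel mount as before, but with the read-only key and the ro mount option
  sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-only,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-only,ro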