https://docs.alliancecan.ca/mediawiki/api.php?action=feedcontributions&user=Dcgriff&feedformat=atomAlliance Doc - User contributions [en]2024-03-28T20:45:22ZUser contributionsMediaWiki 1.39.6https://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_user_documentation&diff=115469Arbutus user documentation2022-05-09T19:14:52Z<p>Dcgriff: </p>
<hr />
<div>{{Draft}}<br />
== Arbutus Documentation ==<br />
<br />
We keep a curated collection of user-facing documents here for the Arbutus community to reference.<br />
<br />
Please let us know through cloud@computecanada.ca if there are documents you would like to see here, or if existing documents need to be updated or changed to reflect the shifting technology landscape.<br />
<br />
While much of the documentation here is specific to Arbutus, some of it may translate to other cloud sites as well.<br />
<br />
==== OpenStack ====<br />
<br />
[[VM recovery via cloud console]]<br />
<br />
==== Storage ====<br />
[[Arbutus CephFS]]<br />
<br />
[[Arbutus Object Storage]]<br />
<br />
==== General ====<br />
<br />
[[Category:CC-Cloud]]</div>Dcgriffhttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_CephFS_user_guide&diff=114217Arbutus CephFS user guide2022-04-14T22:39:07Z<p>Dcgriff: Dcgriff moved page Arbutus CephFS user guide to Arbutus CephFS</p>
<hr />
<div>#REDIRECT [[Arbutus CephFS]]</div>Dcgriffhttps://docs.alliancecan.ca/mediawiki/index.php?title=CephFS&diff=114216CephFS2022-04-14T22:39:07Z<p>Dcgriff: Dcgriff moved page Arbutus CephFS user guide to Arbutus CephFS</p>
<hr />
<div>= User Guide to Provisioning and Deploying CephFS =<br />
<br />
== Introduction ==<br />
<br />
CephFS provides a common filesystem that can be shared amongst multiple OpenStack VM hosts. Access to the service is granted via requests to cloud@computecanada.ca.<br />
<br />
This is a fairly technical procedure that assumes basic Linux skills for creating and editing files, setting permissions, and creating mount points. Contact the technical resource for your project for assistance in setting up this service.<br />
<br />
== Technical Procedure ==<br />
<br />
=== Request Access to Shares ===<br />
<br />
* If you do not already have a quota for the service you will need to request this through cloud@computecanada.ca.<br />
** In your request please provide the following:<br />
*** OpenStack Project name<br />
*** Amount of quota required in GB.<br />
*** If more than one share is required, how many are required?<br />
<br />
=== Create Share ===<br />
<br />
# Create a share in &quot;Shares&quot; under the &quot;Share&quot; menu (a command-line sketch follows this list):<br />
#* Give this a name that identifies your project: project-name-shareName<br />
#** e.g. def-project-shareName<br />
#* Share Protocol = cephfs<br />
#* Size = size you need for this share<br />
#* Share Type = cephfs<br />
#* Availability Zone = nova<br />
#* Do not check &quot;Make visible for all&quot;, otherwise the share would be accessible by anyone in every project.<br />
# Create an Access Rule which generates an Access Key<br />
#* On the &quot;Shares&quot; pane, click on the drop down menu under &quot;Actions&quot; and select &quot;Manage Rules&quot;.<br />
#* Create a new rule using the &quot;+Add Rule&quot; button.<br />
#** Access Type = cephx<br />
#** Select &quot;read-write&quot; or &quot;read-only&quot; under &quot;Access Level&quot;. You can create multiple rules for either access level if required.<br />
#** Choose a key name in the &quot;Access To&quot; field that describes the key (e.g. def-project-shareName-read-write).<br />
# Note the Share details:<br />
#* Click on the share.<br />
#* Under &quot;Overview&quot;, note the &quot;Path&quot; which you will need later.<br />
#* Under &quot;Access Control&quot;, note the &quot;Access Key&quot; which you will need later.<br />
#* Access Keys are approximately 40 characters and end with an &quot;=&quot; sign.<br />
#* If you do not see an Access Key, you probably didn't add an Access Rule with an Access Type = cephx.<br />
<br />
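The dashboard steps above can also be scripted from a machine with the OpenStack command-line clients installed. The following is a minimal, untested sketch using the Manila client; it assumes the <code>python-manilaclient</code> package is installed, your OpenStack credentials are sourced, and that the share type is named cephfs as described above (confirm these details for your project before relying on it):<br />
<pre># create a 100 GB CephFS share named def-project-shareName<br />
manila create cephfs 100 --name def-project-shareName --share-type cephfs --availability-zone nova<br />
<br />
# add a read-write cephx access rule; this is what generates the Access Key<br />
manila access-allow def-project-shareName cephx def-project-shareName-read-write --access-level rw<br />
<br />
# retrieve the Access Key and the Path needed later for mounting<br />
manila access-list def-project-shareName<br />
manila show def-project-shareName<br />
</pre><br />
<br />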
=== Configure Host ===<br />
<br />
<ol><br />
<li><p>Install required packages</p><br />
<ul><br />
<li><p>Red Hat Family (RHEL, CentOS, Fedora, Scientific Linux, SUSE, etc.):</p><br />
<ol><br />
<li>Install relevant repos for access to ceph client packages:<br />
<pre>ceph-stable (nautilus is current as of this writing)<br />
https://docs.ceph.com/en/nautilus/install/get-packages/<br />
epel (sudo yum install epel-release)<br />
<br />
</pre></li><br />
<li>Install packages to enable the ceph client on all the VMs you plan on mounting the share:<br />
<pre>libcephfs2<br />
python-cephfs<br />
ceph-common<br />
python-ceph-argparse<br />
ceph-fuse (only if you intend a fuse mount)<br />
</pre></li></ol><br />
<br />
<ul><br />
<li>Debian Family (Debian, Ubuntu, Mint, etc.):</li></ul><br />
<br />
<pre> https://docs.ceph.com/en/nautilus/install/get-packages/<br />
</pre></li></ul><br />
</li><br />
<li><p>Configure Keys:</p><br />
<ul><br />
<li><p>Create two files in your VM each containing the &quot;Access Key&quot;. This key can be found in the rule definition, or in the &quot;Access Rules&quot; section of your share definition.</p></li><br />
<li><p>File 1: /etc/ceph/client.fullkey.shareName (e.g. client.fullkey.def-project-shareName-read-write)</p><br />
<ul><br />
<li>contents:<br />
<pre>[client.shareName]<br />
key = AccessKey<br />
</pre></li></ul><br />
</li><br />
<li><p>File 2: /etc/ceph/client.keyonly.shareName (e.g. client.keyonly.def-project-shareName-read-write)</p><br />
<ul><br />
<li>contents:<br />
<pre>AccessKey<br />
</pre></li><br />
<li>This file only contains the Access Key</li></ul><br />
</li><br />
<li><p>Set the ownership of these files correctly to protect the key information:</p><br />
<ul><br />
<li>Each file should be owned by root</li></ul><br />
<br />
<pre>sudo chown root:root filename<br />
</pre><br />
<ul><br />
<li>Each file should be only readable by root</li></ul><br />
<br />
<pre>sudo chmod 600 filename<br />
</pre></li></ul><br />
</li><br />
<li><p>Create <code>/etc/ceph/ceph.conf</code> with contents:</p><br />
<pre>[client]<br />
client quota = true<br />
mon host = 10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789<br />
</pre><br />
<ul><br />
<li>Note: these are the monitors for the Arbutus cluster - if connecting to a different cluster you will need the monitor information specific to that cluster.<br />
<ul><br />
<li>You can find the monitor information in the Share Details for your share in the &quot;Path&quot; field.</li></ul><br />
</li></ul><br />
</li><br />
<li><p>Retrieve the connection information from the share page for your connection:</p><br />
<ul><br />
<li>Open up the share details by clicking the name of the share in the Shares page.</li><br />
<li>Copy the entire path of the share for mounting the filesystem.</li></ul><br />
</li><br />
<li><p>Mount the filesystem</p><br />
<ul><br />
<li>Create mount point directory somewhere in your host (likely under /mnt/ - e.g. /mnt/ShareName)</li><br />
<li>Via kernel mount using the ceph driver:<br />
<ul><br />
<li>Syntax: <code>sudo mount -t ceph &lt;path information&gt; &lt;mountPoint&gt; -o name=&lt;shareKeyName&gt;,secretfile=&lt;/path/to/keyringfileOnlyFile&gt;</code></li><br />
<li><code>sudo mount -t ceph mon1:6789,mon2:6789,mon3:6789:/volumes/_nogroup/share_instance_id &lt;mountPoint&gt; -o name=&lt;shareKeyName&gt;,secretfile=&lt;/path/to/keyringfileOnlyFile&gt;</code> (see the /etc/fstab sketch after this list to make the mount persistent)<br />
<ul><br />
<li>e.g. <code>sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-write</code></li></ul><br />
</li></ul><br />
</li><br />
<li>Via ceph-fuse<br />
<ul><br />
<li>Need to install ceph-fuse</li><br />
<li>Syntax: <code>sudo ceph-fuse &lt;mountPoint&gt; --id=&lt;shareKeyName&gt; --conf=&lt;pathtoCeph.conf&gt; --keyring=&lt;fullKeyringLocation&gt; --client-mountpoint=pathFromShareDetails</code><br />
<ul><br />
<li>e.g. <code>sudo ceph-fuse /mnt/WebServerShare --id=def-project-shareName-read-write --conf=/etc/ceph/ceph.conf --keyring=/etc/ceph/client.fullkey.def-project-shareName-read-write --client-mountpoint=/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7</code></li></ul><br />
</li></ul><br />
</li></ul><br />
</li></ol><br />
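<br />
To make the kernel mount persistent across reboots, an entry can be added to <code>/etc/fstab</code>. Below is an untested sketch that simply reuses the example values from the mount command above; the monitor addresses, share path, key name and secret file are placeholders that must be replaced with your own share details:<br />
<pre># /etc/fstab entry (a single line); _netdev delays mounting until the network is up<br />
192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare ceph name=def-project-shareName-read-write,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-write,_netdev,noatime 0 0<br />
</pre><br />
After adding the entry, <code>sudo mount -a</code> can be used to test it without rebooting.<br />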
<br />
== Notes ==<br />
<br />
* A particular share can have more than one user key provisioned for it.<br />
** This allows more granular access to the filesystem.<br />
** For example, you may need some hosts to access the filesystem in a read-only capacity.<br />
** If you have multiple keys for a share, you can add the extra keys to your host and modify the above mounting procedure, as sketched below.<br />
* This service is not available to hosts outside of the OpenStack cluster.<br />
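<br />
As an illustration of the multiple-key note above, a host that should only read the data could be given a second, read-only access rule (for example named def-project-shareName-read-only) with its key stored in its own secret file; an untested sketch of the corresponding mount, assuming those names, would be:<br />
<pre>sudo mount -t ceph 192.168.17.13:6789,192.168.17.14:6789,192.168.17.15:6789:/volumes/_nogroup/a87b5ef3-b266-4664-a5ed-026cddfdcdb7 /mnt/WebServerShare -o name=def-project-shareName-read-only,secretfile=/etc/ceph/client.keyonly.def-project-shareName-read-only<br />
</pre><br />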
<br />
[[Category:CC-Cloud]]</div>Dcgriffhttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage_clients&diff=114124Arbutus object storage clients2022-04-14T15:19:17Z<p>Dcgriff: </p>
<hr />
<div>For information on obtaining Arbutus Object Storage, please see the [[Arbutus_Object_Storage_User_Guide|Object Storage User Guide]]. This page describes how to configure and use two common object storage clients:<br />
# s3cmd<br />
# WinSCP<br />
<br />
It is important to note that the Arbutus Object Storage solution does not use Amazon's [https://documentation.help/s3-dg-20060301/VirtualHosting.html S3 Virtual Hosting] (i.e. DNS-based bucket) approach, which these clients assume by default; they need to be configured not to use that approach, as described below.<br />
<br />
== s3cmd ==<br />
=== Installing s3cmd ===<br />
Depending on your Linux distribution, the <code>s3cmd</code> command can be installed using the appropriate <code>yum</code> (RHEL, CentOS) or <code>apt</code> (Debian, Ubuntu) command:<br />
<br />
<code>$ sudo yum install s3cmd</code><br/><br />
<code>$ sudo apt install s3cmd </code><br />
<br />
=== Configuring s3cmd ===<br />
To configure the <code>s3cmd</code> tool, use the command:<br/><br />
<code>$ s3cmd --configure</code><br />
<br />
And make the following configurations with the keys provided by the Arbutus team:<br />
<pre><br />
Enter new values or accept defaults in brackets with Enter.<br />
Refer to user manual for detailed description of all options.<br />
<br />
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.<br />
Access Key []: 20_DIGIT_ACCESS_KEY<br />
Secret Key []: 40_DIGIT_SECRET_KEY<br />
Default Region [US]:<br />
<br />
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.<br />
S3 Endpoint []: object-arbutus.cloud.computecanada.ca<br />
<br />
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used<br />
if the target S3 system supports dns based buckets.<br />
DNS-style bucket+hostname:port template for accessing a bucket []: object-arbutus.cloud.computecanada.ca<br />
<br />
Encryption password is used to protect your files from reading<br />
by unauthorized persons while in transfer to S3<br />
Encryption password []: PASSWORD<br />
Path to GPG program []: /usr/bin/gpg<br />
<br />
When using secure HTTPS protocol all communication with Amazon S3<br />
servers is protected from 3rd party eavesdropping. This method is<br />
slower than plain HTTP, and can only be proxied with Python 2.7 or newer<br />
Use HTTPS protocol []: Yes<br />
<br />
On some networks all internet access must go through a HTTP proxy.<br />
Try setting it here if you can't connect to S3 directly<br />
HTTP Proxy server name:<br />
</pre><br />
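<br />
Once the configuration is saved (by default s3cmd writes it to <code>~/.s3cfg</code>), a quick way to confirm that the endpoint and keys work is to list your buckets; the command prints nothing if you have not created any buckets yet and reports an error if the configuration is wrong:<br />
<br />
<code>$ s3cmd ls</code><br />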
<br />
=== Create buckets ===<br />
The next task is to make a bucket. Buckets contain files. Bucket names must be globally unique across the Arbutus object storage solution, so you will need to create a uniquely named bucket which will not conflict with other users. For example, the buckets "s3://test/" and "s3://data/" are likely already taken. Consider creating buckets reflective of your project, for example "s3://def-test-bucket1" or "s3://atlas_project_bucket". Valid bucket names may only contain uppercase characters, lowercase characters, digits, periods, dashes, and underscores (i.e. A-Z, a-z, 0-9, ., -, and _ ).<br />
<br />
To create a bucket, use the tool's <code>mb</code> (make bucket) command:<br />
<br />
<code>$ s3cmd mb s3://BUCKET_NAME/</code><br />
<br />
To see the status of a bucket, use the <code>info</code> command:<br />
<br />
<code>$ s3cmd info s3://BUCKET_NAME/</code><br />
<br />
The output will look something like this:<br />
<br />
<pre><br />
s3://BUCKET_NAME/ (bucket):<br />
Location: default<br />
Payer: BucketOwner<br />
Expiration Rule: none<br />
Policy: none<br />
CORS: none<br />
ACL: *anon*: READ<br />
ACL: USER: FULL_CONTROL<br />
URL: http://object-arbutus.cloud.computecanada.ca/BUCKET_NAME/<br />
</pre><br />
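<br />
To list your buckets, or the objects stored in a particular bucket, use the <code>ls</code> command:<br />
<br />
<code>$ s3cmd ls</code><br/><br />
<code>$ s3cmd ls s3://BUCKET_NAME/</code><br />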
<br />
=== Upload files ===<br />
To upload a file to the bucket, use the <code>put</code> command similar to this:<br />
<br />
<code>$ s3cmd put --guess-mime-type FILE_NAME.dat s3://BUCKET_NAME/FILE_NAME.dat</code><br />
<br />
Where the bucket name and the file name are specified. Multipurpose Internet Mail Extensions (MIME) is a mechanism for handling files based on their type. The <code>--guess-mime-type</code> command parameter will guess the MIME type based on the file extension. The default MIME type is <code>binary/octet-stream</code>.<br />
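<br />
To upload many files at once, the <code>sync</code> command copies a local directory into a bucket recursively and skips files that are already up to date. A brief sketch, where the local directory and destination prefix are placeholders:<br />
<br />
<code>$ s3cmd sync LOCAL_DIRECTORY/ s3://BUCKET_NAME/PREFIX/</code><br />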
<br />
=== Delete File ===<br />
To delete a file from the bucket, use the <code>rm</code> command similar to this:<br/><br />
<code>$ s3cmd rm s3://BUCKET_NAME/FILE_NAME.dat</code><br />
<br />
=== ACLs and Policies ===<br />
Buckets can have Access Control Lists (ACLs) and policies which govern who can access what resources in the object store. These features are quite sophisticated. Here are two simple examples of using ACLs using the tool's <code>setacl</code> command.<br />
<br />
<code>$ s3cmd setacl --acl-public -r s3://BUCKET_NAME/</code><br />
<br />
The result of this command is that the public can access the bucket and recursively (-r) every file in the bucket. Files can be accessed via URLs such as<br/><br />
https://object-arbutus.cloud.computecanada.ca/BUCKET_NAME/FILE_NAME.dat.<br />
<br />
The second ACL example limits access to the bucket to only the owner:<br />
<br />
<code>$ s3cmd setacl --acl-private s3://BUCKET_NAME/</code><br />
<br />
Other more sophisticated examples can be found in the s3cmd man page.<br />
<br />
== WinSCP ==<br />
<br />
=== Installing WinSCP ===<br />
WinSCP can be downloaded from https://winscp.net/ and installed by launching the downloaded WinSCP installer.<br />
<br />
=== Configuring WinSCP ===<br />
Under "New Session", make the following configurations:<br />
<ul><br />
<li>File protocol: Amazon S3</li><br />
<li>Host name: object-arbutus.cloud.computecanada.ca</li><br />
<li>Port number: 443</li><br />
<li>Access key ID: 20_DIGIT_ACCESS_KEY provided by the Arbutus team</li><br />
</ul><br />
and "Save" these settings as shown below<br />
<br />
[[File:WinSCP Configuration.png|600px|thumb|center|WinSCP configuration screen]]<br />
<br />
Next, click on the "Edit" button, then click on "Advanced..." and navigate to "Environment", then "S3", then "Protocol options", where the "URL style:" setting <b>must</b> be changed from "Virtual Host" to "Path" as shown below:<br />
<br />
[[File:WinSCP Path Configuration.png|600px|thumb|center|WinSCP Path Configuration]]<br />
<br />
This "Path" setting is important, otherwise WinSCP will not work and you will see hostname resolution errors, like this:<br />
[[File:WinSCP resolve error.png|400px|thumb|center|WinSCP resolve error]]<br />
<br />
=== Using WinSCP ===<br />
Click on the "Login" button and use the WinSCP GUI to create buckets and to transfer files:<br />
<br />
[[File:WinSCP transfers.png|800px|thumb|center|WinSCP file transfer screen]]<br />
<br />
=== ACLs and Policies ===<br />
Unfortunately, as of version 5.19 WinSCP is not capable of managing object storage ACLs and Policies.<br />
<br />
[[Category:CC-Cloud]]</div>Dcgriffhttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=113957Arbutus object storage2022-04-12T16:38:54Z<p>Dcgriff: </p>
<hr />
<div>= Arbutus Object Storage =<br />
<br />
All Arbutus projects are allocated 1 TB of Object Store by default. If more is required, you can apply for either a RAS allocation or a RAC allocation.<br />
<br />
We offer access to the Object Store via two different protocols:<br />
<br />
# Swift<br />
# S3<br />
<br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
Swift is given by default and is simpler, since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all of the functionality of S3. The main difference is bucket policies: if you want to manage your buckets using bucket policies, you must use S3, as Swift does not support them. You can also create and manage your own keys using S3, which can be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
== Accessing and Managing Object Store ==<br />
<br />
You can interact with your Object Store using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers; in this context the two terms are interchangeable. Please note that if you create a new container as "Public", any object placed within this container can be accessed (read-only) by anyone on the internet simply by navigating to the following URL, with your container and object names inserted in place:<br />
<br />
https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE><br />
<br />
You can also use the Swift command-line tool included with the OpenStack command-line clients.<br />
For instructions on how to install and operate the OpenStack command-line clients, please refer<br />
to [https://docs.openstack.org/python-openstackclient/latest/ the OpenStack documentation], as this falls outside of the scope of this document.<br />
<br />
If you wish to use the S3 protocol, you can generate your own S3 access and secret keys using the OpenStack command-line client:<br />
<br />
<code>openstack ec2 credentials create</code><br />
<br />
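This command prints the access key and secret key to use with S3 clients. If you need to retrieve them again later, the same client can list and display existing credentials (a brief sketch, assuming the standard python-openstackclient):<br />
<br />
<code>openstack ec2 credentials list</code><br />
<br />
<code>openstack ec2 credentials show ACCESS_KEY</code><br />
<br />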
The tool "s3cmd" which is available in Linux is the preferred way to access our S3 gateway, however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
Users are responsible for operations inside of the 'tenant' (project); as such, the buckets and the management of those buckets are up to the user.<br />
<br />
== Some General Information ==<br />
<br />
* Buckets are owned by the user that creates them, and no other users can manipulate them.<br />
* You can make a bucket world accessible which then gives you a URL to share that will serve content in the bucket.<br />
* Bucket and object names must be unique across ''all'' users in the Object Store, so you may benefit from prefixing each bucket and object with your project name to maintain uniqueness.<br />
* Bucket policies are managed via JSON files.<br />
<br />
== Connection Details and s3cmd Configuration ==<br />
<br />
The object storage is accessible via an HTTP endpoint:<br />
<br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
The following is an example of a bare minimum s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
Using s3cmd's <code>--configure</code> feature is [https://docs.computecanada.ca/wiki/Arbutus_Object_Storage_Clients#Configuring_s3cmd described here].<br />
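<br />
Alternatively, if the configuration above is saved to a file, it can be passed explicitly to any s3cmd invocation with the <code>-c</code>/<code>--config</code> option (the file name below is just an illustration):<br />
<br />
<code>s3cmd -c ~/.s3cfg-arbutus ls</code><br />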
<br />
== Example Bucket operations ==<br />
<br />
<ul><br />
<li><p>Making a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>Example bucket policy:</p><br />
<p>You need to first create a policy json file:</p><br />
<pre>&quot;testbucket.policy&quot;: <br />
{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Statement&quot;: [{<br />
&quot;Effect&quot;: &quot;Allow&quot;,<br />
&quot;Principal&quot;: {&quot;AWS&quot;: [<br />
&quot;arn:aws:iam::rrg_cjhuofw:user/parsa7&quot;,<br />
&quot;arn:aws:iam::rrg_cjhuofw:user/dilbar&quot;<br />
]},<br />
&quot;Action&quot;: [<br />
&quot;s3:ListBucket&quot;,<br />
&quot;s3:PutObject&quot;,<br />
&quot;s3:DeleteObject&quot;,<br />
&quot;s3:GetObject&quot;<br />
],<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket/*&quot;,<br />
&quot;arn:aws:s3:::testbucket&quot;<br />
]<br />
}]<br />
}<br />
</pre><br />
<p>This file allows you to set specific permissions for any number of users of that bucket.</p><br />
<p>You can even specify users from another tenant if there is a user from another project working with you.</p><br />
<p>Now that you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<p>More extensive examples and actions can be found here: https://www.linode.com/docs/platform/object-storage/how-to-use-object-storage-acls-and-bucket-policies/</p></li></ul><br />
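<br />
To confirm that a policy (or an ACL) has been applied, and to remove a policy later, s3cmd's <code>info</code> and <code>delpolicy</code> commands can be used:<br />
<br />
<code>s3cmd info s3://testbucket</code><br />
<br />
<code>s3cmd delpolicy s3://testbucket</code><br />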
<br />
[[Category:CC-Cloud]]</div>Dcgriffhttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=112260Arbutus object storage2022-02-22T22:01:28Z<p>Dcgriff: </p>
<hr />
<div>{{Draft}}<br />
<br />
= Arbutus Object storage =<br />
<br />
Object storage at Arbutus can be requested via cloud@computecanada.ca.<br />
<br />
You can either apply for a RAS allocation or a RAC allocation.<br />
<br />
We offer access to the Object Store via three different protocols:<br />
<br />
# S3<br />
# Swift<br />
# Radosgw<br />
<br />
== Access Request Information ==<br />
<br />
When requesting access we will ask you for the following:<br />
<br />
* Project code (e.g. rrg_piUserName)<br />
* CC username of the user(s) to add<br />
* Actual name of user(s) (e.g. Mike Cave)<br />
* Expiry date of user(s) - This is used if the user is a temp member of the group (e.g. grad student, lab assistant, temporary group member)<br />
* Permission type (read, write, both), per user<br />
* Do you need a Swift key?<br />
<br />
Once we have the basic users and account setup on the object storage service, we will let you know how to collect the keys for access.<br />
<br />
== Bucket Management ==<br />
<br />
Admins will create the project for new requests and users are generated for the project at the time.<br />
<br />
The users are responsible for operations inside of the 'tenant'. As such, the buckets and management of those buckets are up to the user.<br />
<br />
The tool &quot;s3cmd&quot;, which is available in Linux, is the preferred way to access our S3 gateway; however, there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
=== Some General Information ===<br />
<br />
* Buckets are owned by the user that creates them and no other users can manipulate them<br />
* You can make a bucket world-accessible, which gives you a URL to share that will serve content from the bucket<br />
* Bucket policies are managed via JSON files<br />
<br />
=== Connection Details and s3cmd Configuration ===<br />
<br />
The object storage is accessible via an HTTP endpoint:<br />
<br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
The following is an example of a bare minimum s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca:443<br />
host_bucket = object-arbutus.cloud.computecanada.ca:443<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
=== Example Bucket operations ===<br />
<br />
<ul><br />
<li><p>Making a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>Example bucket policy:</p><br />
<p>You need to first create a policy json file:</p><br />
<pre>&quot;testbucket.policy&quot;: <br />
{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Statement&quot;: [{<br />
&quot;Effect&quot;: &quot;Allow&quot;,<br />
&quot;Principal&quot;: {&quot;AWS&quot;: [<br />
&quot;arn:aws:iam::rrg_cjhuofw:user/parsa7&quot;,<br />
&quot;arn:aws:iam::rrg_cjhuofw:user/dilbar&quot;<br />
]},<br />
&quot;Action&quot;: [<br />
&quot;s3:ListBucket&quot;,<br />
&quot;s3:PutObject&quot;,<br />
&quot;s3:DeleteObject&quot;,<br />
&quot;s3:GetObject&quot;<br />
],<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket/*&quot;,<br />
&quot;arn:aws:s3:::testbucket&quot;<br />
]<br />
}]<br />
}<br />
</pre><br />
<p>This file allows you to set specific permissions for any number of users of that bucket.</p><br />
<p>You can even specify users from another tenant if there is a user from another project working with you.</p><br />
<p>Now that you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<p>More extensive examples and actions can be found here: https://www.linode.com/docs/platform/object-storage/how-to-use-object-storage-acls-and-bucket-policies/</p></li></ul><br />
<br />
[[Category:CC-Cloud]]</div>Dcgriffhttps://docs.alliancecan.ca/mediawiki/index.php?title=Cloud_RAS_Allocations&diff=100001Cloud RAS Allocations2021-04-28T16:16:34Z<p>Dcgriff: Added object storage and shared filesystem values</p>
<hr />
<div><languages /><br />
<br />
<translate><br />
<!--T:10--><br />
''Parent page: [[Cloud]]''<br />
<br />
<!--T:1--><br />
Any Compute Canada user can access modest quantities of compute, storage and cloud resources as soon as they have a Compute Canada account. Rapid Access Service ('''RAS''') allows users to experiment and to start working right away. Many research groups can meet their needs through using the Rapid Access Service only. Users requiring larger resource quantities can apply to one of our annual [https://www.computecanada.ca/research-portal/accessing-resources/resource-allocation-competitions/ Resource Allocation Competitions] ('''RAC'''). Primary Investigators (PIs) with a current RAC allocation are still able to request resources via RAS.<br />
<br />
<!--T:11--><br />
Using cloud resources such as storage, compute and network, researchers can create '''''cloud instances''''' (also known as Virtual machines or VMs). There are two options available for Compute Canada cloud resources:<br />
* '''Compute cloud''': These are instances that have a '''limited life-time''' (wall-time) and typically have '''constant high-CPU''' requirements. They are sometimes referred to as ‘batch’ instances. Users may need a large number of compute instances for production activities. Maximum wall-time for compute instances is '''one month'''. Upon reaching their life-time limit these instances will be scheduled for deactivation and their owners will be notified in order to ensure they clean up their instances and download any required data. Any grace period is subject to resources availability at that time.<br />
* '''Persistent cloud''': These are instances that are meant to run '''indefinitely''' and would include '''web servers''', '''database servers''', etc. In general, these instances provide a persistent service and use '''less CPU''' power than compute instances.<br />
<br />
<!--T:12--><br />
Cloud RAS resource limits<br />
<br />
<!--T:3--><br />
{| class="wikitable"<br />
|-<br />
! Attributes !! Compute Cloud<ref name="both-renewal">Users may request both a compute and persistent allocation to share a single project. Storage is shared between the two allocations and is limited to 10TB/PI in total. PI’s may request a 1-year renewal of their cloud RAS allocations an unlimited number of times; however, allocations will be given based on available resources and are not guaranteed. Requests made after January 1 will expire March of the following year and therefore may be longer than 1 year. Allocation requests made between May-December will be less than 1 year. Renewals will take effect in April.</ref> !! Persistent Cloud<ref name="both-renewal"/><br />
|-<br />
| Who can request || PIs only || PIs only<br />
|-<br />
| VCPUs (see [[Virtual_machine_flavors|VM flavours]]) || 80 || 25<br />
|-<br />
| Instances<ref name="softquota">This is a metadata quota, not a hard limit; users can request an increase beyond these values without a RAC request.</ref> || 20 || 10<br />
|-<br />
| Volumes<ref name="softquota"/> || 2 || 10<br />
|-<br />
| Volume snapshots<ref name="softquota"/> || 2 || 10<br />
|-<br />
| RAM (GB) || 300 || 50<br />
|-<br />
| Floating IP || 2 || 2<br />
|-<br />
| Persistent storage (GB) <br />
|colspan="2" align="center" | 10000<br />
|-<br />
| Object storage (GB) <br />
|colspan="2" align="center" | 10000<br />
|-<br />
| Shared filesystem storage (GB) <br />
|colspan="2" align="center" | 10000<br />
|-<br />
| Default duration || 1 year<ref name="renwal">This is to align with the RAC allocation period of April-March.</ref>, with 1 month wall-time || 1 year (renewable)<ref name="renwal"/><br />
|-<br />
| Default renewal || April<ref name="renwal"/> || April<ref name="renwal"/><br />
|}<br />
<br />
<!--T:2--><br />
To request RAS, please fill out [https://docs.google.com/forms/d/e/1FAIpQLSeU_BoRk5cEz3AvVLf3e9yZJq-OvcFCQ-mg7p4AWXmUkd5rTw/viewform this form] online.<br />
<br />
<br />
<br />
<!--T:13--><br />
<small><br />
<br />
==Notes== <!--T:14--><br />
<references/><br />
</small> <br />
</translate></div>Dcgriffhttps://docs.alliancecan.ca/mediawiki/index.php?title=National_systems&diff=98516National systems2021-03-31T18:58:42Z<p>Dcgriff: </p>
<hr />
<div>{{Outdated}}<br />
<br />
<languages /><br />
<br />
<translate><br />
<br />
==Compute== <!--T:1--><br />
<br />
===Overview=== <!--T:2--><br />
<br />
<!--T:3--><br />
Cedar, Graham and Béluga are similar systems with some differences in interconnect and the number of large memory, small memory and GPU nodes.<br />
<br />
<!--T:4--><br />
{| class="wikitable"<br />
|-<br />
! Name !! Description !! Capacity !! Status<br />
|-<br />
| [[CC-Cloud Resources|Arbutus Cloud]] ||<br />
IaaS Cloud<br />
* Compute intensive and persistent workloads<br />
* vGPU nodes<br />
|| 44,112 virtual cores || In production<br />
|-<br />
| [[Béluga/en|Béluga]] ||<br />
heterogeneous, general-purpose cluster<br />
* Serial and small parallel jobs<br />
* GPU and big memory nodes<br />
|| 34,880 cores || In production<br />
|-<br />
| [[CC-Cloud Resources|Béluga Cloud]] ||<br />
IaaS Cloud<br />
* Compute intensive and persistent workloads<br />
|| 12,288 virtual cores || In production<br />
|-<br />
| [[Cedar|Cedar]] ||<br />
heterogeneous, general-purpose cluster<br />
* Serial and small parallel jobs<br />
* GPU and big memory nodes<br />
|| 94,528 cores || In production<br />
|-<br />
| [[CC-Cloud Resources|Cedar Cloud]] ||<br />
IaaS Cloud<br />
* Compute intensive and persistent workloads<br />
|| 4,352 virtual cores || In production<br />
|-<br />
| [[Graham|Graham]] ||<br />
heterogeneous, general-purpose cluster<br />
* Serial and small parallel jobs<br />
* GPU and big memory nodes<br />
|| 41,548 cores || In production<br />
|-<br />
| [[CC-Cloud Resources|Graham Cloud]] ||<br />
IaaS Cloud<br />
* Compute intensive and persistent workloads<br />
|| 11,232 virtual cores || In production<br />
|-<br />
| [[Niagara|Niagara]] ||<br />
homogeneous, large parallel cluster<br />
* Designed for large parallel jobs > 1000 cores<br />
|| 80,960 cores || In production<br />
|}<br />
<br />
<!--T:5--><br />
All systems have large, high-performance attached storage; see the relevant cluster page for more details.<br />
<br />
==CCDB descriptions== <!--T:9--><br />
<br />
<!--T:10--><br />
General descriptions are also available on CCDB:<br />
* [https://ccdb.computecanada.ca/resources/beluga-compute Béluga-Compute]<br />
* [https://ccdb.computecanada.ca/resources/beluga-gpu Béluga-GPU]<br />
* [https://ccdb.computecanada.ca/resources/Cedar-Compute Cedar-Compute]<br />
* [https://ccdb.computecanada.ca/resources/Cedar-GPU Cedar-GPU] <br />
* [https://ccdb.computecanada.ca/resources/Graham-Compute Graham-Compute]<br />
* [https://ccdb.computecanada.ca/resources/Graham-GPU Graham-GPU]<br />
* [https://ccdb.computecanada.ca/resources/ndc-calculquebec NDC-Calcul Québec]<br />
* [https://ccdb.computecanada.ca/resources/NDC-SFU NDC-SFU]<br />
* [https://ccdb.computecanada.ca/resources/NDC-Waterloo NDC-Waterloo]<br />
<br />
</translate><br />
<br />
[[Category:Migration2016]]</div>Dcgriffhttps://docs.alliancecan.ca/mediawiki/index.php?title=National_systems&diff=98506National systems2021-03-31T17:59:30Z<p>Dcgriff: Updating cloud values and splitting clouds into their own entry</p>
<hr />
<div>{{Outdated}}<br />
<br />
<languages /><br />
<br />
<translate><br />
<br />
==Compute== <!--T:1--><br />
<br />
===Overview=== <!--T:2--><br />
<br />
<!--T:3--><br />
Cedar, Graham and Béluga are similar systems with some differences in interconnect and the number of large memory, small memory and GPU nodes.<br />
<br />
<!--T:4--><br />
{| class="wikitable"<br />
|-<br />
! Name !! Description !! Capacity !! Status<br />
|-<br />
| [[CC-Cloud Resources|Arbutus Cloud]] ||<br />
IaaS Cloud<br />
* Compute intensive and persistent workloads<br />
* vGPU nodes<br />
|| 41,920 virtual cores || In production<br />
|-<br />
| [[Béluga/en|Béluga]] ||<br />
heterogeneous, general-purpose cluster<br />
* Serial and small parallel jobs<br />
* GPU and big memory nodes<br />
|| 34,880 cores || In production<br />
|-<br />
| [[CC-Cloud Resources|Béluga Cloud]] ||<br />
IaaS Cloud<br />
* Compute intensive and persistent workloads<br />
|| 12,288 virtual cores || In production<br />
|-<br />
| [[Cedar|Cedar]] ||<br />
heterogeneous, general-purpose cluster<br />
* Serial and small parallel jobs<br />
* GPU and big memory nodes<br />
|| 94,528 cores || In production<br />
|-<br />
| [[CC-Cloud Resources|Cedar Cloud]] ||<br />
IaaS Cloud<br />
* Compute intensive and persistent workloads<br />
|| 4,352 virtual cores || In production<br />
|-<br />
| [[Graham|Graham]] ||<br />
heterogeneous, general-purpose cluster<br />
* Serial and small parallel jobs<br />
* GPU and big memory nodes<br />
|| 41,548 cores || In production<br />
|-<br />
| [[CC-Cloud Resources|Graham Cloud]] ||<br />
IaaS Cloud<br />
* Compute intensive and persistent workloads<br />
|| 11,232 virtual cores || In production<br />
|-<br />
| [[Niagara|Niagara]] ||<br />
homogeneous, large parallel cluster<br />
* Designed for large parallel jobs > 1000 cores<br />
|| 80,960 cores || In production<br />
|}<br />
<br />
<!--T:5--><br />
All systems have large, high-performance attached storage; see the relevant cluster page for more details.<br />
<br />
==CCDB descriptions== <!--T:9--><br />
<br />
<!--T:10--><br />
General descriptions are also available on CCDB:<br />
* [https://ccdb.computecanada.ca/resources/beluga-compute Béluga-Compute]<br />
* [https://ccdb.computecanada.ca/resources/beluga-gpu Béluga-GPU]<br />
* [https://ccdb.computecanada.ca/resources/Cedar-Compute Cedar-Compute]<br />
* [https://ccdb.computecanada.ca/resources/Cedar-GPU Cedar-GPU] <br />
* [https://ccdb.computecanada.ca/resources/Graham-Compute Graham-Compute]<br />
* [https://ccdb.computecanada.ca/resources/Graham-GPU Graham-GPU]<br />
* [https://ccdb.computecanada.ca/resources/ndc-calculquebec NDC-Calcul Québec]<br />
* [https://ccdb.computecanada.ca/resources/NDC-SFU NDC-SFU]<br />
* [https://ccdb.computecanada.ca/resources/NDC-Waterloo NDC-Waterloo]<br />
<br />
</translate><br />
<br />
[[Category:Migration2016]]</div>Dcgriff