https://docs.alliancecan.ca/mediawiki/api.php?action=feedcontributions&user=Rmc&feedformat=atom Alliance Doc - User contributions [en] 2024-03-28T09:54:14Z User contributions MediaWiki 1.39.6
https://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=150980 Arbutus object storage 2024-03-08T17:09:04Z <p>Rmc: Undo revision 150979 by Mihow (talk)</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or <i>object</i> is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can request up to an additional 9TB through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. Allocations larger than 10TB must be requested under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, management of a project's object storage containers is self-service. This includes operations such as [[Backing up your VM|backups]] because the object store itself is not backed up. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all of the functionality of S3. The key difference is that if you want to manage your object storage containers using access policies, you must use S3, since Swift does not support access policies. With S3 you can also create and manage your own keys, which is useful if, for example, you want to create a read-only user account for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus Object Store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object Store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
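The command prints an access key and a secret key. As a sketch of how the pair might then be used, the values can be placed in an <code>s3cmd</code> configuration file pointing at the Arbutus endpoint (the key values below are placeholders; substitute the ones printed for your account):

```shell
# Placeholder credentials; in practice, copy the access and secret values
# printed by `openstack ec2 credentials create`.
ACCESS_KEY="1a2b3c4d5e6f7a8b9c0d"
SECRET_KEY="0d9c8b7a6f5e4d3c2b1a"

# Write a minimal s3cmd configuration for the Arbutus object store endpoint.
cat > s3cfg.example <<EOF
[default]
access_key = ${ACCESS_KEY}
secret_key = ${SECRET_KEY}
host_base = object-arbutus.cloud.computecanada.ca
host_bucket = object-arbutus.cloud.computecanada.ca
use_https = True
EOF
```

To use such a file, you would save it as <code>~/.s3cfg</code>, the default location read by <code>s3cmd</code>; see [[Accessing_object_storage_with_s3cmd|configuring and managing access]] for the authoritative configuration steps.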
<br />
= Accessing your Arbutus Object Store = <!--T:35--><br />
Setting access policies cannot be done via a web browser but must be done with a [[Arbutus object storage clients|Swift or S3-compatible client]]. There are two ways to access your data containers:<br />
<br />
<!--T:21--><br />
# If your data container policies are set to private (the default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g., s3cmd).<br />
# If your object storage policies are set to public (not the default), object storage is accessible using a browser via an HTTPS endpoint:<br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER/FILENAME</code><br />
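The public URL is simply the endpoint followed by the container name and the object name. A small sketch with hypothetical names (<code>def-myname-test</code> and <code>results.csv</code> are placeholders):

```shell
DATA_CONTAINER="def-myname-test"   # hypothetical container name
FILENAME="results.csv"             # hypothetical object name

# Assemble the public HTTPS URL for the object.
URL="https://object-arbutus.cloud.computecanada.ca/${DATA_CONTAINER}/${FILENAME}"
echo "${URL}"
# A public object could then be fetched with, e.g.: curl -O "${URL}"
```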
<br />
= Managing your Arbutus Object Store = <!--T:36--><br />
<br />
<!--T:15--><br />
The recommended way to manage buckets and objects in the Arbutus Object Store is with the <code>s3cmd</code> tool, which is available in Linux.<br />
Our documentation provides specific instructions on [[Accessing_object_storage_with_s3cmd|configuring and managing access]] with the <code>s3cmd</code> client.<br />
Other [[Arbutus object storage clients|S3-compatible clients]] can also be used with the Arbutus Object Store.<br />
<br />
<!--T:10--><br />
In addition, we can perform certain management tasks for our object storage using the [https://arbutus.cloud.computecanada.ca/project/containers Containers] section under the <b>Object Store</b> tab in the [https://arbutus.cloud.computecanada.ca Arbutus OpenStack Dashboard].<br />
<br />
<!--T:37--><br />
This interface refers to <i>data containers</i>, which are also known as <i>buckets</i> in other object storage systems.<br />
<br />
<!--T:38--><br />
Using the dashboard, we can create new data containers, upload files, and create directories. Alternatively, we can also create data containers using [[Arbutus object storage clients|S3-compatible clients]].<br />
<br />
<!--T:39--><br />
{{quote|Please note that data containers are owned by the user who creates them and cannot be manipulated by others.<br/>Therefore, you are responsible for managing your data containers and their contents within your cloud project.}}<br />
<br />
<!--T:40--><br />
If you create a new container as <b>Public</b>, anyone on the internet can read its contents by simply navigating to <br />
<br />
<!--T:41--><br />
<code><br />
<nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki><br />
</code><br />
<br />
<!--T:42--><br />
with your container and object names inserted in place.<br />
<br />
<!--T:43--><br />
{{quote|It's important to keep in mind that each data container on the <b>Arbutus Object Store</b> must have a <b>unique name across all users</b>. To ensure uniqueness, we may want to prefix our data container names with our project name to avoid conflicts with other users. One useful rule of thumb is to refrain from using generic names like <code>test</code> for data containers. Instead, consider using more specific and unique names like <code>def-myname-test</code>.}}<br />
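The naming convention above can be sketched as building every container name from a project-specific prefix plus a descriptive suffix (both values below are hypothetical):

```shell
PROJECT="def-myname"   # hypothetical project prefix
DATASET="test"         # descriptive suffix for this container
BUCKET="${PROJECT}-${DATASET}"
echo "${BUCKET}"       # prints: def-myname-test
```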
<br />
<!--T:44--><br />
To make a data container accessible to the public, we can change its policy to allow public access. This can come in handy if we need to share files to a wider audience. We can manage container policies using JSON files, allowing us to specify various access controls for our containers and objects.<br />
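As a sketch of such a policy, the following grants anonymous read access to every object in a container; <code>testbucket</code> is a placeholder name, and <code>s3:GetObject</code> is one of the actions in the supported [[Arbutus_object_storage#Policy_subset|subset]]. As with any policy, double-check it before applying, since a mistake can lock you out of your container:

```json
{
    "Version": "2012-10-17",
    "Id": "S3PolicyPublicRead",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::testbucket/*"
        }
    ]
}
```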
<br />
== Managing data container (bucket) policies for your Arbutus Object Store == <!--T:31--><br />
<br/><br />
{{Warning|title=Attention|content=Be careful with policies because an ill-conceived policy can lock you out of your data container.}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage supports only a [[Arbutus_object_storage#Policy_subset|subset]] of the AWS specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container policies]. The following example shows how to create, apply, and view a policy. The first step is to create a policy JSON file:<br />
<br />
<!--T:45--><br />
<syntaxhighlight lang="json"><br />
{<br />
    "Version": "2012-10-17",<br />
    "Id": "S3PolicyId1",<br />
    "Statement": [<br />
        {<br />
            "Sid": "IPAllow",<br />
            "Effect": "Deny",<br />
            "Principal": "*",<br />
            "Action": "s3:*",<br />
            "Resource": [<br />
                "arn:aws:s3:::testbucket",<br />
                "arn:aws:s3:::testbucket/*"<br />
            ],<br />
            "Condition": {<br />
                "NotIpAddress": {<br />
                    "aws:SourceIp": [<br />
                        "206.12.0.0/16",<br />
                        "142.104.0.0/16"<br />
                    ]<br />
                }<br />
            }<br />
        }<br />
    ]<br />
}<br />
</syntaxhighlight><br />
<br />
<!--T:46--><br />
This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to <code>s3://testbucket</code> is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.<br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the data container:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
== Policy subset == <!--T:47--><br />
<br />
<!--T:48--><br />
Currently, we support only the following actions:<br />
<br />
<!--T:49--><br />
* s3:AbortMultipartUpload<br />
* s3:CreateBucket<br />
* s3:DeleteBucketPolicy<br />
* s3:DeleteBucket<br />
* s3:DeleteBucketWebsite<br />
* s3:DeleteObject<br />
* s3:DeleteObjectVersion<br />
* s3:DeleteReplicationConfiguration<br />
* s3:GetAccelerateConfiguration<br />
* s3:GetBucketAcl<br />
* s3:GetBucketCORS<br />
* s3:GetBucketLocation<br />
* s3:GetBucketLogging<br />
* s3:GetBucketNotification<br />
* s3:GetBucketPolicy<br />
* s3:GetBucketRequestPayment<br />
* s3:GetBucketTagging<br />
* s3:GetBucketVersioning<br />
* s3:GetBucketWebsite<br />
* s3:GetLifecycleConfiguration<br />
* s3:GetObjectAcl<br />
* s3:GetObject<br />
* s3:GetObjectTorrent<br />
* s3:GetObjectVersionAcl<br />
* s3:GetObjectVersion<br />
* s3:GetObjectVersionTorrent<br />
* s3:GetReplicationConfiguration<br />
* s3:IPAddress<br />
* s3:NotIpAddress<br />
* s3:ListAllMyBuckets<br />
* s3:ListBucketMultipartUploads<br />
* s3:ListBucket<br />
* s3:ListBucketVersions<br />
* s3:ListMultipartUploadParts<br />
* s3:PutAccelerateConfiguration<br />
* s3:PutBucketAcl<br />
* s3:PutBucketCORS<br />
* s3:PutBucketLogging<br />
* s3:PutBucketNotification<br />
* s3:PutBucketPolicy<br />
* s3:PutBucketRequestPayment<br />
* s3:PutBucketTagging<br />
* s3:PutBucketVersioning<br />
* s3:PutBucketWebsite<br />
* s3:PutLifecycleConfiguration<br />
* s3:PutObjectAcl<br />
* s3:PutObject<br />
* s3:PutObjectVersionAcl<br />
* s3:PutReplicationConfiguration<br />
* s3:RestoreObject<br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmc
https://docs.alliancecan.ca/mediawiki/index.php?title=Cloud_storage_options&diff=144767 Cloud storage options 2023-09-29T18:03:34Z <p>Rmc: Added multi-disk fault tolerance</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
<!--T:1--><br />
The existing storage types available in our clouds are:<br />
<br />
<!--T:2--><br />
* '''[[Working_with_volumes | Volume storage]]''': The standard storage unit for cloud computing; can be attached to and detached from an instance. <br />
* '''Ephemeral/Disk storage''': Virtual local disk storage tied to the lifecycle of a single instance.<br />
* '''[[Arbutus object storage | Object storage]]''': Non-hierarchical storage where data is created or uploaded in whole-file form.<br />
* '''[[Arbutus_CephFS | Shared filesystem storage]]''': Storage in the cloud shared filesystem; must be configured on each instance where it is mounted.<br />
<br />
<!--T:3--><br />
Attributes of each storage type are compared in the following table:<br />
<br />
<!--T:4--><br />
{| class="wikitable sortable"<br />
! Attribute !! Volume storage !! Ephemeral/Disk storage !! Object storage !! Shared filesystem storage <br />
|-<br />
| Default storage option || Yes || Yes || No || No<br />
|-<br />
| Can be accessed via Web browser || No || No || Yes || No <br />
|-<br />
| Access can be restricted for specific source IP ranges || Yes || Yes || Yes (S3 ACL) || Yes <br />
|-<br />
| Can be mounted on a single VM || Yes || Yes || No || Yes <br />
|-<br />
| Can be mounted on multiple VMs (and across projects) simultaneously || No || No || No || Yes <br />
|-<br />
| Automatic backups || No (manually with snapshots) || No || No || Yes (nightly to tape)<br />
|-<br />
| Suitable for write once, read only, and public access || No || No || Yes || No <br />
|-<br />
| Suitable for data/files that change frequently || Yes || Yes || No || Yes<br />
|-<br />
| Hierarchical filesystem || Yes || Yes || No || Yes <br />
|-<br />
| Suitable for long-term storage || Yes || No || Yes || No <br />
|-<br />
| Deleted automatically upon deletion of VM || No || Yes || No || No <br />
|- <br />
| Standard magnitude of allocation || GB || GB || TB || TB <br />
|- <br />
| Multi-disk fault tolerance || Yes || c-flavors: No; p-flavors: Yes || Yes || Yes <br />
|- <br />
| Physical disk-level encryption || No || No || No || No <br />
|- <br />
|}<br />
<br />
</translate><br />
[[Category:Cloud]]</div>
https://docs.alliancecan.ca/mediawiki/index.php?title=Working_with_volumes&diff=131752 Working with volumes 2023-03-15T16:03:17Z <p>Rmc: Added details on how to persist a mount</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
<!--T:1--><br />
A volume provides storage which is not destroyed when a VM is terminated. On our clouds, volumes use [https://en.wikipedia.org/wiki/Ceph_(software) Ceph] storage with either a 3-fold replication factor or [https://en.wikipedia.org/wiki/Erasure_code erasure codes] to provide safety against hardware failure. On [[Cloud_resources|Arbutus]], the <i>Default</i> volume type uses erasure codes to provide data safety while reducing the extra storage costs of 3-fold replication, whereas the <i>OS or Database</i> volume type still uses the 3-fold replication factor. More documentation about OpenStack volumes can be found [https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html here].<br />
<br />
=Creating a volume= <!--T:2--><br />
<br />
<!--T:3--><br />
[[File:Creating_a_volume_EN.png|300px|thumb| Create Volume dialog (Click for larger image)]]<br />
<br />
<!--T:4--><br />
To create a volume click on [[File:Create-Volume-Button.png]] and fill in the following fields:<br />
<br />
<!--T:5--><br />
*<i>Volume Name</i>: <code>data</code>, for example<br/><br />
*<i>Description</i>: (optional)<br />
*<i>Volume Source</i>: <code>No source, empty volume</code><br/><br />
*<i>Type</i>: <code>No volume type</code><br/><br />
*<i>Size (GiB)</i>: <code>40</code>, or some suitable size for your data or operating system<br/><br />
*<i>Availability Zone</i>: the only option is <code>nova</code><br/><br />
<br />
<!--T:6--><br />
Finally, click on the blue <i>Create Volume</i> button at the bottom.<br />
<br />
=Mounting a volume on a VM= <!--T:7--><br />
==Attaching a volume==<br />
[[File:Manage_attachments_EN.png|400px|thumb| Managing attachments command in the Actions menu (Click for larger image)]]<br />
* <b>Attaching</b> is the process of associating a volume with a VM. This is analogous to inserting a USB key or plugging an external drive into your personal computer.<br />
* You can attach a volume from the <i>Volumes</i> page in the dashboard.<br />
* At the right-hand end of the line describing the volume is the <i>Actions</i> column; from the drop-down menu, select <i>Manage Attachments</i>.<br />
* In the <i>Attach to Instance</i> drop-down menu, select a VM. <br />
* Click on the blue <i>Attach Volume</i> button. <br />
Attaching should complete in a few seconds. Then the volumes page will show the newly created volume attached to your selected VM on <code>/dev/vdb</code> or some similar location.<br />
==Formatting a newly created volume==<br />
* <b>DO NOT FORMAT</b> the volume if you are attaching an existing volume; skip this step, since a volume you have previously used to store data will already have been formatted.<br />
* <b>Formatting</b> erases all existing information on a volume and therefore should be done with care.<br />
* Formatting is the process of preparing a volume to store directories and files.<br />
* Before a newly created and attached volume can be used, it must be formatted.<br />
* See instructions for doing this on a [[Using a new empty volume on a Linux VM|Linux]] or [[Using a new empty volume on a Windows VM|Windows]] VM.<br />
<br />
==Mounting a volume== <!--T:23--><br />
* '''Mounting''' is the process of mapping the volume's directory and file structure logically within the VM's directory and file structure.<br />
* To mount the volume, use a command similar to <code>[name@server ~]$ sudo mount /dev/vdb1 /mnt</code> depending on the device name, disk layout, and the desired mount point in your filesystem.<br />
This command makes the volume's directory and file structure available under the VM's <code>/mnt</code> directory. However, when the virtual machine reboots, the volume will need to be re-mounted using the same <code>mount</code> command.<br />
<br />
<!--T:24--><br />
It is possible to automatically mount a volume when a virtual machine boots. This requires adding a line to the file <code>/etc/fstab</code> with details about how the volume should be mounted.<br />
<br />
To find the volume's UUID, use the <code>blkid</code> command:<br />
<code>blkid</code><br />
<br />
Based on the UUID, add a line to <code>/etc/fstab</code> like this:<br />
<br />
<code>/dev/disk/by-uuid/anananan-anan-anana-anan-ananananana /mnt auto defaults,nofail 0 3</code><br />
<br />
where <code>anananan-anan-anana-anan-ananananana</code> is replaced with the UUID of the device you wish to auto-mount.<br />
<br />
For more details about how to edit this file, see this [https://help.ubuntu.com/community/Fstab Ubuntu community help page].<br />
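The steps above can be sketched end-to-end: extract the UUID from a line of <code>blkid</code> output and build the corresponding <code>/etc/fstab</code> entry. The sample <code>blkid</code> line and UUID below are made up for illustration, and the entry is written to a scratch file rather than to <code>/etc/fstab</code> itself:

```shell
# A sample blkid output line for illustration; a real device's UUID will differ.
BLKID_LINE='/dev/vdb1: UUID="4f2d588e-29c8-4f5c-b69c-22cdd3b75e29" TYPE="ext4"'

# Extract the value of the UUID field.
UUID=$(echo "${BLKID_LINE}" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')

# Build the fstab entry (on a real VM you would append this to /etc/fstab).
echo "/dev/disk/by-uuid/${UUID} /mnt auto defaults,nofail 0 3" > fstab.example
cat fstab.example
```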
<br />
=Booting from a volume= <!--T:8--><br />
If you want to run a persistent machine, it is safest to boot from a volume. When you boot a VM from an image rather than a volume, the VM is stored on the local disk of the physical machine running it. If something goes wrong with that machine or its disk, the VM may be lost. Volume storage has redundancy, which protects the VM from hardware failure. Typically, VM flavors starting with the letter "p" are used when booting from a volume (see [[Virtual machine flavors]]).<br />
<br />
<!--T:9--><br />
There are several ways to boot a VM from a volume. You can <br />
* boot from an image, creating a new volume, or <br />
* boot from a pre-existing volume, or<br />
* boot from a volume snapshot, creating a new volume.<br />
<br />
<!--T:10--><br />
If you have not done this before, then the first one is your only option. The other two are only possible if you have already created a bootable volume or a volume snapshot.<br />
<br />
<!--T:11--><br />
If creating a volume as part of the process of launching the VM, select <i>Boot from image (creates a new volume)</i>, then select the image to use and the size of the volume. If you would like this volume to outlive the VM, ensure that the <i>Delete on Terminate</i> box is not checked. If you are unsure about this option, it is better to leave this box unchecked; you can manually delete the volume later.<br />
<br />
=Creating an image from a volume= <!--T:12--><br />
[[File:Upload_volume_from_image_EN.png|400px|thumb| Upload to Image form (Click for larger image)]]<!--Note to translator: there is a FR version of this screen shot at [[File:Os-upload-volume-to-image-fr.png]]--><br />
Creating an image from a volume allows you to download the image. Do this if you want to save it as a backup, or to spin up a VM on a different cloud, e.g., with [https://www.virtualbox.org/ VirtualBox]. If you want to copy a volume to a new volume within the same cloud see [[#Cloning a Volume|cloning a volume]] instead. <br />
<br />
<!--T:21--><br />
To create an image of a volume, it must first be detached from a VM. If it is a boot (root) volume, it can only be detached if the VM is terminated/deleted; in that case, make sure you have not checked <i>Delete Volume on Instance Delete</i> when creating the VM.<br />
<br />
<!--T:22--><br />
Large images (more than 10-20GB) may be very slow to create, upload, and otherwise manage. You may want to consider [[Backing_up_your_VM#An_example_backup_strategy | separating data]] if possible.<br />
<br />
==Using the dashboard== <!--T:13--><br />
# Click on the <i>Volumes</i> left-hand menu.<br />
# Under the volume you wish to create an image of, click on the drop-down <i>Actions</i> menu and select <i>Upload to Image</i>.<br />
# Choose a name for your new image.<br />
# Choose a disk format. QCOW2 is recommended for use within the OpenStack cloud, as it is relatively compact compared to <i>Raw</i> and works well with OpenStack. If you wish to use the image with VirtualBox, the <i>vmdk</i> or <i>vdi</i> image formats might be better suited.<br />
# Finally, click on <i>Upload</i>.<br />
<br />
==Using the command line client== <!--T:14--><br />
The [[OpenStack command line clients|command line client]] can do this:<br />
{{Command|openstack image create --disk-format <format> --volume <volume_name> <image_name>}}<br />
where <br />
* <format> is the disk format (two possible values are [https://en.wikipedia.org/wiki/Qcow qcow2] and [https://en.wikipedia.org/wiki/VMDK vmdk]),<br />
* <volume_name> can be found from the OpenStack dashboard by clicking on the volume name, and<br />
* <image_name> is a name you choose for the image.<br />
You can then [[Working_with_images#Downloading_an_Image|download the image]]. <br />
<br />
=Cloning a volume= <!--T:15--><br />
Cloning is the recommended method for copying volumes. While it is possible to make an image of an existing volume and use it to create a new volume, cloning is much faster and requires less movement of data behind the scenes. This method is handy if you have a persistent VM and you want to test something before doing it on your production site. It is highly recommended to shut down your VM before creating a clone, as the new volume may be left in an inconsistent state if the source volume was written to while the clone was being created. To create a clone, use the [[OpenStack command line clients|command line client]] with this command:<br />
{{Command|openstack volume create --source <source-volume-id> --size <size-of-new-volume> <name-of-new-volume>}}<br />
<br />
=Detaching a volume= <!--T:16--><br />
Before detaching a volume, it is important to make sure that the operating system and other programs running on your VM are not accessing files on this volume. Otherwise, the detached volume can be left in a corrupted state or the programs could show unexpected behaviours. To avoid this, you can either shut down the VM before you detach the volume or [[Using_a_new_empty_volume_on_a_Linux_VM#Unmounting_a_volume_or_device|unmount the volume]].<br />
<br />
<!--T:17--><br />
To detach a volume, log in to the OpenStack dashboard (see the [[Cloud#Cloud_systems|list of links to our cloud systems]]) and select the project containing the volume you wish to detach. Selecting <i>Volumes -> Volumes</i> displays the project’s volumes. For each volume, the <i>Attached to</i> column indicates where the volume is attached. <br />
<br />
<!--T:18--><br />
*If attached to <code>/dev/vda</code>, it is a boot volume; you must delete the attached VM before the volume can be detached, otherwise you will get the error message ''Unable to detach volume''.<br />
<br />
<!--T:19--><br />
*With volumes attached to <code>/dev/vdb</code>, <code>/dev/vdc</code>, etc., you do not need to delete the VM they are attached to before proceeding. In the ''Actions'' column drop-down list, select ''Manage Attachments'', click on the ''Detach Volume'' button and again on the next ''Detach Volume'' button to confirm.<br />
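For reference, the same detachment can be sketched from the command line with the OpenStack client's <code>server remove volume</code> subcommand. The server and volume names below are hypothetical, and the command is only printed here as a dry run:

```shell
# Dry-run sketch of the command-line equivalent of "Manage Attachments":
# detach a volume from a server with the OpenStack client.
# Server and volume names are hypothetical; remove the echo to actually run it.
server="my-vm"
volume="my-data-volume"
cmd="openstack server remove volume $server $volume"
echo "$cmd"
```

As in the dashboard, make sure the volume is unmounted inside the VM before detaching it.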
<br />
<!--T:20--><br />
[[Category:Cloud]]<br />
</translate></div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Working_with_volumes&diff=131750Working with volumes2023-03-15T15:58:20Z<p>Rmc: /* Formatting a newly created volume */ moved up bold format on formatting to first instance</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
<!--T:1--><br />
A volume provides storage which is not destroyed when a VM is terminated. On our clouds, volumes use [https://en.wikipedia.org/wiki/Ceph_(software) Ceph] storage with either a 3-fold replication factor or [https://en.wikipedia.org/wiki/Erasure_code erasure codes] to provide safety against hardware failure. On [[Cloud_resources|Arbutus]], the <i>Default</i> volume type uses erasure codes to provide data safety while reducing the extra storage costs of 3-fold replication, whereas the <i>OS or Database</i> volume type still uses the 3-fold replication factor. More documentation about OpenStack volumes can be found [https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html here].<br />
<br />
=Creating a volume= <!--T:2--><br />
<br />
<!--T:3--><br />
[[File:Creating_a_volume_EN.png|300px|thumb| Create Volume dialog (Click for larger image)]]<br />
<br />
<!--T:4--><br />
To create a volume click on [[File:Create-Volume-Button.png]] and fill in the following fields:<br />
<br />
<!--T:5--><br />
*<i>Volume Name</i>: <code>data</code>, for example<br/><br />
*<i>Description</i>: (optional)<br />
*<i>Volume Source</i>: <code>No source, empty volume</code><br/><br />
*<i>Type</i>: <code>No volume type</code><br/><br />
*<i>Size (GiB)</i>: <code>40</code>, or some suitable size for your data or operating system<br/><br />
*<i>Availability Zone</i>: the only option is <code>nova</code><br/><br />
<br />
<!--T:6--><br />
Finally, click on the blue <i>Create Volume</i> button at the bottom.<br />
<br />
=Mounting a volume on a VM= <!--T:7--><br />
==Attaching a volume==<br />
[[File:Manage_attachments_EN.png|400px|thumb| Managing attachments command in the Actions menu (Click for larger image)]]<br />
* <b>Attaching</b> is the process of associating a volume with a VM. This is analogous to inserting a USB key or plugging an external drive into your personal computer.<br />
* You can attach a volume from the <i>Volumes</i> page in the dashboard.<br />
* At the right-hand end of the line describing the volume is the <i>Actions</i> column; from the drop-down menu, select <i>Manage Attachments</i>.<br />
* In the <i>Attach to Instance</i> drop-down menu, select a VM. <br />
* Click on the blue <i>Attach Volume</i> button. <br />
Attaching should complete in a few seconds. Then the volumes page will show the newly created volume attached to your selected VM on <code>/dev/vdb</code> or some similar location.<br />
==Formatting a newly created volume==<br />
* <b>DO NOT FORMAT</b> if you are attaching an existing volume. Instead, skip this step; the volume was already formatted when you first used it to store data.<br />
* <b>Formatting</b> erases all existing information on a volume and therefore should be done with care.<br />
* Formatting is the process of preparing a volume to store directories and files.<br />
* Before a newly created and attached volume can be used, it must be formatted.<br />
* See instructions for doing this on a [[Using a new empty volume on a Linux VM|Linux]] or [[Using a new empty volume on a Windows VM|Windows]] VM.<br />
<br />
==Mounting a volume== <!--T:23--><br />
* '''Mounting''' is the process of mapping the volume's directory and file structure logically within the VM's directory and file structure.<br />
* To mount the volume, use a command similar to <code>[name@server ~]$ sudo mount /dev/vdb1 /mnt</code> depending on the device name, disk layout, and the desired mount point in your filesystem.<br />
This command makes the volume's directory and file structure available under the VM's /mnt directory. However, when the virtual machine reboots, the volume will need to be re-mounted using the same <code>mount</code> command.<br />
<br />
<!--T:24--><br />
It is possible to automatically mount volumes when a virtual machine boots. This requires editing the file named /etc/fstab to contain a new line with details about how the volume should be mounted. For more details about how to edit this file see this [https://help.ubuntu.com/community/Fstab Ubuntu community help page].<br />
<br />
=Booting from a volume= <!--T:8--><br />
If you want to run a persistent machine, it is safest to boot from a volume. When you boot a VM from an image rather than a volume, the VM is stored on the local disk of the actual machine running the VM. If something goes wrong with that machine or its disk, the VM may be lost. Volume storage has redundancy, which protects the VM from hardware failure. Typically, VM flavors starting with the letter "p" are used when booting from a volume (see [[Virtual machine flavors]]).<br />
<br />
<!--T:9--><br />
There are several ways to boot a VM from a volume. You can <br />
* boot from an image, creating a new volume, or <br />
* boot from a pre-existing volume, or<br />
* boot from a volume snapshot, creating a new volume.<br />
<br />
<!--T:10--><br />
If you have not done this before, then the first one is your only option. The other two are only possible if you have already created a bootable volume or a volume snapshot.<br />
<br />
<!--T:11--><br />
If creating a volume as part of the process of launching the VM, select <i>Boot from image (creates a new volume)</i>, then select the image to use and the size of the volume. If you would like this volume to outlive the VM, ensure that the <i>Delete on Terminate</i> box is not checked. If you are unsure about this option, it is better to leave this box unchecked; you can manually delete the volume later.<br />
<br />
=Creating an image from a volume= <!--T:12--><br />
[[File:Upload_volume_from_image_EN.png|400px|thumb| Upload to Image form (Click for larger image)]]<!--Note to translator: there is a FR version of this screen shot at [[File:Os-upload-volume-to-image-fr.png]]--><br />
Creating an image from a volume allows you to download the image. Do this if you want to save it as a backup, or to spin up a VM on a different cloud, e.g., with [https://www.virtualbox.org/ VirtualBox]. If you want to copy a volume to a new volume within the same cloud see [[#Cloning a Volume|cloning a volume]] instead. <br />
<br />
<!--T:21--><br />
To create an image of a volume, it must first be detached from a VM. If it is a boot (root) volume, it can only be detached if the VM is terminated/deleted; in that case, make sure you have not checked <i>Delete Volume on Instance Delete</i> when creating the VM.<br />
<br />
<!--T:22--><br />
Large images (more than 10-20GB) may be very slow to create, upload, and otherwise manage. You may want to consider [[Backing_up_your_VM#An_example_backup_strategy | separating data]] if possible.<br />
<br />
==Using the dashboard== <!--T:13--><br />
# Click on the <i>Volumes</i> left-hand menu.<br />
# Under the volume you wish to create an image of, click on the drop-down <i>Actions</i> menu and select <i>Upload to Image</i>.<br />
# Choose a name for your new image.<br />
# Choose a disk format. QCOW2 is recommended for use within the OpenStack cloud, as it is relatively compact compared to <i>Raw</i> and works well with OpenStack. If you wish to use the image with VirtualBox, the <i>vmdk</i> or <i>vdi</i> image formats might be better suited.<br />
# Finally, click on <i>Upload</i>.<br />
<br />
==Using the command line client== <!--T:14--><br />
The [[OpenStack command line clients|command line client]] can do this:<br />
{{Command|openstack image create --disk-format <format> --volume <volume_name> <image_name>}}<br />
where <br />
* <format> is the disk format (two possible values are [https://en.wikipedia.org/wiki/Qcow qcow2] and [https://en.wikipedia.org/wiki/VMDK vmdk]),<br />
* <volume_name> can be found from the OpenStack dashboard by clicking on the volume name, and<br />
* <image_name> is a name you choose for the image.<br />
You can then [[Working_with_images#Downloading_an_Image|download the image]]. <br />
<br />
=Cloning a volume= <!--T:15--><br />
Cloning is the recommended method for copying volumes. While it is possible to make an image of an existing volume and use it to create a new volume, cloning is much faster and requires less movement of data behind the scenes. This method is handy if you have a persistent VM and you want to test something before doing it on your production site. It is highly recommended to shut down your VM before creating a clone, as the new volume may be left in an inconsistent state if the source volume was written to while the clone was being created. To create a clone, use the [[OpenStack command line clients|command line client]] with this command:<br />
{{Command|openstack volume create --source <source-volume-id> --size <size-of-new-volume> <name-of-new-volume>}}<br />
<br />
=Detaching a volume= <!--T:16--><br />
Before detaching a volume, it is important to make sure that the operating system and other programs running on your VM are not accessing files on this volume. Otherwise, the detached volume can be left in a corrupted state or the programs could show unexpected behaviours. To avoid this, you can either shut down the VM before you detach the volume or [[Using_a_new_empty_volume_on_a_Linux_VM#Unmounting_a_volume_or_device|unmount the volume]].<br />
<br />
<!--T:17--><br />
To detach a volume, log in to the OpenStack dashboard (see the [[Cloud#Cloud_systems|list of links to our cloud systems]]) and select the project containing the volume you wish to detach. Selecting <i>Volumes -> Volumes</i> displays the project’s volumes. For each volume, the <i>Attached to</i> column indicates where the volume is attached. <br />
<br />
<!--T:18--><br />
*If attached to <code>/dev/vda</code>, it is a boot volume; you must delete the attached VM before the volume can be detached, otherwise you will get the error message ''Unable to detach volume''.<br />
<br />
<!--T:19--><br />
*With volumes attached to <code>/dev/vdb</code>, <code>/dev/vdc</code>, etc., you do not need to delete the VM they are attached to before proceeding. In the ''Actions'' column drop-down list, select ''Manage Attachments'', click on the ''Detach Volume'' button and again on the next ''Detach Volume'' button to confirm.<br />
<br />
<!--T:20--><br />
[[Category:Cloud]]<br />
</translate></div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Virtual_machine_flavors&diff=131652Virtual machine flavors2023-03-14T17:03:19Z<p>Rmc: Clarified that researchers can only see the flavors they have access to and that they are also visible in the Dashboard</p>
<hr />
<div><languages /><br />
<translate><br />
<!--T:1--><br />
''Parent page: [[Cloud]]''<br />
<br />
<!--T:2--><br />
{{box|Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM, disk, number of cores, and so on. ...<br />
Flavors define a number of parameters, resulting in the user having a choice of what type of virtual machine to run—just like they would have if they were purchasing a physical server. - [http://netapp.github.io/openstack-deploy-ops-guide/icehouse/content/section_nova-key-concepts.html ''NetApp OpenStack Deployment and Operations Guide'']}}<br />
<br />
<!--T:7--><br />
Researchers are able to view all the flavors they have been allocated for their project. These can be seen in the Horizon Dashboard and via the [[OpenStack command line clients]] with the following command:<br />
{{Command|openstack flavor list --sort-column RAM}}<br />
<br />
If you have a project and need a flavor not currently allocated, please email cloud@tech.alliancecan.ca.<br />
<br />
<!--T:3--><br />
Virtual machine flavors have names like:<br />
c2-7.5gb-92<br />
p1-0.75gb<br />
g1-8gb-c4-22gb<br />
By convention, the prefix "c" designates "compute", "p" designates "persistent", and "g" designates "vGPU". The prefix is followed by the number of vCPUs/vGPUs, then the amount of RAM after the dash. If a second dash is present, it is followed by the size of the secondary ephemeral disk in gigabytes. In the case of vGPUs, the compute flavor is included after the vGPU information.<br />
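The naming convention above can be illustrated with a short shell sketch that decodes a flavor name into its parts. This is an illustrative helper, not an official tool:

```shell
# Decode a flavor name (e.g. "c2-7.5gb-92") following the convention above.
describe_flavor() {
    local name="$1" kind
    case "${name%%[0-9]*}" in      # prefix letter before the first digit
        c) kind="compute" ;;
        p) kind="persistent" ;;
        g) kind="vGPU" ;;
        *) kind="unknown" ;;
    esac
    local rest="${name#?}"          # strip the prefix letter
    local units="${rest%%-*}"       # number of vCPUs/vGPUs
    rest="${rest#*-}"
    local ram="${rest%%-*}"         # RAM size, e.g. "7.5gb"
    local out="$kind, $units vCPU(s)/vGPU(s), $ram RAM"
    if [ "$ram" != "$rest" ]; then  # anything after a second dash:
        out="$out, extra: ${rest#*-}"   # ephemeral disk GB, or nested compute flavor
    fi
    printf '%s\n' "$out"
}

describe_flavor "c2-7.5gb-92"    # compute, 2 vCPU(s)/vGPU(s), 7.5gb RAM, extra: 92
describe_flavor "p1-0.75gb"      # persistent, 1 vCPU(s)/vGPU(s), 0.75gb RAM
```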
<br />
[[File:Flavors.png|thumb|alt=Openstack flavors|Openstack flavors]]<br />
<br />
<!--T:4--><br />
A virtual machine of "c" flavor is intended for jobs of finite lifetime and for development and testing tasks. It starts from a [https://en.wikipedia.org/wiki/Qcow qcow2]-format image. Its disks reside on the local hardware running the VM and have no redundancy ([https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0 raid0]). The root disk is typically 20GB in size. "c" flavor VMs also have a secondary ephemeral data disk. These storage devices are created and destroyed with the instance. The Arbutus cloud treats “c” flavors differently: they have no over-commit on CPU, so they are targeted towards CPU-intensive tasks.<br />
<br />
<!--T:5--><br />
A virtual machine of "p" flavor is intended to run for an indeterminate length of time. There is no predefined root disk. The intended use of "p" flavors is that they should be [[Working_with_volumes#Booting_from_a_volume|booted from a volume]], in which case the instance will be backed by the Ceph storage system and have greater redundancy and resistance to failure than a "c" instance. We recommend using a volume size of at least 20GB for the persistent VM root disk. The Arbutus cloud treats “p” flavors differently: they are placed on compute nodes with a higher level of redundancy (disk and network) and do over-commit the CPU, so they are geared towards web servers, database servers, and instances with a lower or bursty CPU usage profile in general.<br />
<br />
<!--T:6--><br />
[[Category:Cloud]]<br />
</translate></div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Virtual_machine_flavors&diff=131651Virtual machine flavors2023-03-14T17:00:07Z<p>Rmc: fixed addition of vGPUs</p>
<hr />
<div><languages /><br />
<translate><br />
<!--T:1--><br />
''Parent page: [[Cloud]]''<br />
<br />
<!--T:2--><br />
{{box|Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM, disk, number of cores, and so on. ...<br />
Flavors define a number of parameters, resulting in the user having a choice of what type of virtual machine to run—just like they would have if they were purchasing a physical server. - [http://netapp.github.io/openstack-deploy-ops-guide/icehouse/content/section_nova-key-concepts.html ''NetApp OpenStack Deployment and Operations Guide'']}}<br />
<br />
<!--T:7--><br />
All virtual machine flavors supported on a given Compute Canada cloud can be obtained from the [[OpenStack command line clients]] with the following command:<br />
{{Command|openstack flavor list --sort-column RAM}}<br />
<br />
<!--T:3--><br />
Virtual machine flavors have names like:<br />
c2-7.5gb-92<br />
p1-0.75gb<br />
g1-8gb-c4-22gb<br />
By convention, the prefix "c" designates "compute", "p" designates "persistent", and "g" designates "vGPU". The prefix is followed by the number of vCPUs/vGPUs, then the amount of RAM after the dash. If a second dash is present, it is followed by the size of the secondary ephemeral disk in gigabytes. In the case of vGPUs, the compute flavor is included after the vGPU information.<br />
<br />
[[File:Flavors.png|thumb|alt=Openstack flavors|Openstack flavors]]<br />
<br />
<!--T:4--><br />
A virtual machine of "c" flavor is intended for jobs of finite lifetime and for development and testing tasks. It starts from a [https://en.wikipedia.org/wiki/Qcow qcow2]-format image. Its disks reside on the local hardware running the VM and have no redundancy ([https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0 raid0]). The root disk is typically 20GB in size. "c" flavor VMs also have a secondary ephemeral data disk. These storage devices are created and destroyed with the instance. The Arbutus cloud treats “c” flavors differently: they have no over-commit on CPU, so they are targeted towards CPU-intensive tasks.<br />
<br />
<!--T:5--><br />
A virtual machine of "p" flavor is intended to run for an indeterminate length of time. There is no predefined root disk. The intended use of "p" flavors is that they should be [[Working_with_volumes#Booting_from_a_volume|booted from a volume]], in which case the instance will be backed by the Ceph storage system and have greater redundancy and resistance to failure than a "c" instance. We recommend using a volume size of at least 20GB for the persistent VM root disk. The Arbutus cloud treats “p” flavors differently: they are placed on compute nodes with a higher level of redundancy (disk and network) and do over-commit the CPU, so they are geared towards web servers, database servers, and instances with a lower or bursty CPU usage profile in general.<br />
<br />
<!--T:6--><br />
[[Category:Cloud]]<br />
</translate></div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Virtual_machine_flavors&diff=131650Virtual machine flavors2023-03-14T16:55:15Z<p>Rmc: Added a screen shot of the flavors</p>
<hr />
<div><languages /><br />
<translate><br />
<!--T:1--><br />
''Parent page: [[Cloud]]''<br />
<br />
<!--T:2--><br />
{{box|Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM, disk, number of cores, and so on. ...<br />
Flavors define a number of parameters, resulting in the user having a choice of what type of virtual machine to run—just like they would have if they were purchasing a physical server. - [http://netapp.github.io/openstack-deploy-ops-guide/icehouse/content/section_nova-key-concepts.html ''NetApp OpenStack Deployment and Operations Guide'']}}<br />
<br />
<!--T:7--><br />
All virtual machine flavors supported on a given Compute Canada cloud can be obtained from the [[OpenStack command line clients]] with the following command:<br />
{{Command|openstack flavor list --sort-column RAM}}<br />
<br />
<!--T:3--><br />
Virtual machine flavors have names like:<br />
c2-7.5gb-92<br />
p1-0.75gb<br />
By convention, the prefix "c" designates "compute" and "p" designates "persistent". The prefix is followed by the number of virtual CPUs, then the amount of RAM after the dash. If a second dash is present, it is followed by the size of the secondary ephemeral disk in gigabytes.<br />
<br />
[[File:Flavors.png|thumb|alt=Openstack flavors|Openstack flavors]]<br />
<br />
<!--T:4--><br />
A virtual machine of "c" flavor is intended for jobs of finite lifetime and for development and testing tasks. It starts from a [https://en.wikipedia.org/wiki/Qcow qcow2]-format image. Its disks reside on the local hardware running the VM and have no redundancy ([https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0 raid0]). The root disk is typically 20GB in size. "c" flavor VMs also have a secondary ephemeral data disk. These storage devices are created and destroyed with the instance. The Arbutus cloud treats “c” flavors differently: they have no over-commit on CPU, so they are targeted towards CPU-intensive tasks.<br />
<br />
<!--T:5--><br />
A virtual machine of "p" flavor is intended to run for an indeterminate length of time. There is no predefined root disk. The intended use of "p" flavors is that they should be [[Working_with_volumes#Booting_from_a_volume|booted from a volume]], in which case the instance will be backed by the Ceph storage system and have greater redundancy and resistance to failure than a "c" instance. We recommend using a volume size of at least 20GB for the persistent VM root disk. The Arbutus cloud treats “p” flavors differently: they are placed on compute nodes with a higher level of redundancy (disk and network) and do over-commit the CPU, so they are geared towards web servers, database servers, and instances with a lower or bursty CPU usage profile in general.<br />
<br />
<!--T:6--><br />
[[Category:Cloud]]<br />
</translate></div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=File:Flavors.png&diff=131649File:Flavors.png2023-03-14T16:54:35Z<p>Rmc: </p>
<hr />
<div>Screen shot of flavor names in openstack</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Cloud_Account_Lifecycle_Management&diff=131648Cloud Account Lifecycle Management2023-03-14T16:36:12Z<p>Rmc: Inital version of the CALM page</p>
<hr />
<div><languages /><br />
<br />
<translate><br />
Cloud Account Lifecycle Management (CALM) is the process by which projects and accounts are provisioned and eventually deprovisioned throughout a lifecycle.<br />
<br />
The type of lifecycle is tied to the type of allocation given to the PI/group by CCDB/the Alliance.<br />
<br />
Upon provisioning a cloud project, CCDB accounts are given roles which provide access to the project on the cloud infrastructure.<br />
<br />
‘Local’ accounts are used within VMs and are outside the scope of CALM.<br />
<br />
</translate><br />
<br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130983Arbutus object storage2023-03-07T17:37:54Z<p>Rmc: added link to the cloud storage options page</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
If you are considering Arbutus object storage against other cloud storage options, they are described in our page on [[Cloud storage options]].<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can request an additional 9 TB through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10 TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike in a cluster computing environment, system administration for a project's containers is handled by the user, including operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. In particular, if you want to manage your object storage containers using access policies, you must use S3, as Swift does not support them. You can also create and manage your own keys using S3, which could be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
To manage your Arbutus Object Store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
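Once generated, these credentials can be used to configure an S3 client. As a sketch, a minimal <code>~/.s3cfg</code> for <code>s3cmd</code> might look like the following, where the access and secret keys are placeholders for the values returned by the command above (see [[Accessing_object_storage_with_s3cmd]] for complete instructions):

```ini
# ~/.s3cfg -- minimal sketch; replace the placeholder credentials with the
# values from "openstack ec2 credentials create"
[default]
access_key = YOUR_ACCESS_ID
secret_key = YOUR_SECRET_KEY
host_base = object-arbutus.cloud.computecanada.ca
host_bucket = object-arbutus.cloud.computecanada.ca
use_https = True
```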
<br />
= Accessing your Arbutus Object Store = <!--T:35--><br />
Setting access policies cannot be done via a web browser; it must be done with a [[Arbutus object storage clients|Swift or S3-compatible client]]. There are two ways to access your data containers:<br />
<br />
<!--T:21--><br />
# if your data container policies are set to private (default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd).<br />
# if your object storage policies are set to public (not default), object storage is accessible using a browser via an HTTPS endpoint:<br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER/FILENAME</code><br />
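As a sketch, the public URL is simply the endpoint followed by the container and object names; the container and file names used below are hypothetical examples:

```shell
# Compose the public HTTPS URL for an object in a public container.
object_url() {
    printf 'https://object-arbutus.cloud.computecanada.ca:443/%s/%s\n' "$1" "$2"
}

url=$(object_url "def-myname-test" "results.csv")
echo "$url"
# If (and only if) the container policy is public, any HTTP client can fetch it:
# curl -O "$url"
```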
<br />
= Managing your Arbutus object store = <!--T:36--><br />
<br />
<!--T:15--><br />
The recommended way to manage buckets and objects in the Arbutus Object Store is with the <code>s3cmd</code> tool, which is available in Linux.<br />
Our documentation provides specific instructions on [[Accessing_object_storage_with_s3cmd|configuring and managing access]] with the <code>s3cmd</code> client.<br />
Other [[Arbutus object storage clients|S3-compatible clients]] can also be used with the Arbutus Object Store.<br />
<br />
<!--T:10--><br />
In addition, we can perform certain management tasks for our object storage using the [https://arbutus.cloud.computecanada.ca/project/containers Containers] section under the '''Object Store''' tab in the [https://arbutus.cloud.computecanada.ca Arbutus OpenStack Dashboard].<br />
<br />
<!--T:37--><br />
This interface refers to "data containers", which are also known as "buckets" in other object storage systems.<br />
<br />
<!--T:38--><br />
Using the dashboard, we can create new data containers, upload files, and create directories. Alternatively, we can also create data containers using [[Arbutus object storage clients|S3-compatible clients]].<br />
<br />
<!--T:39--><br />
{{quote|Please note that data containers are owned by the user who creates them and cannot be manipulated by others.<br/>Therefore, you are responsible for managing your data containers and their contents within your cloud project.}}<br />
<br />
<!--T:40--><br />
If you create a new container as '''Public''', anyone on the Internet can read its contents by simply navigating to <br />
<br />
<!--T:41--><br />
<code><br />
<nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki><br />
</code><br />
<br />
<!--T:42--><br />
with your container and object names inserted in place.<br />
<br />
<!--T:43--><br />
{{quote|It's important to keep in mind that each data container on the '''Arbutus Object Store''' must have a '''unique name across all users'''. To ensure uniqueness, we may want to prefix our data container names with our project name to avoid conflicts with other users. One useful rule of thumb is to refrain from using generic names like <code>test</code> for data containers. Instead, consider using more specific and unique names like <code>def-myname-test</code>.}}<br />
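The naming rule of thumb above can be captured in a tiny helper; the function name and prefixing scheme here are only an illustration of the convention, not an Arbutus requirement:<br />

```python
def prefixed_container_name(project: str, name: str) -> str:
    """Prefix a container name with the project name to reduce collisions.

    Container names are shared across *all* Arbutus users, so a bare name
    like "test" is likely already taken; "def-myname-test" is far safer.
    """
    return f"{project}-{name}"

print(prefixed_container_name("def-myname", "test"))  # def-myname-test
```
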
<br />
<!--T:44--><br />
To make a data container accessible to the public, we can change its policy to allow public access. This can come in handy if we need to share files with a wider audience. We can manage container policies using JSON files, which let us specify various access controls for our containers and objects.<br />
<br />
== Managing data container (bucket) policies for your Arbutus Object Store == <!--T:31--><br />
<br/><br />
{{Warning|title=Attention|content=Be careful with policies because an ill-conceived policy can lock you out of your data container.}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only supports a [[Arbutus_object_storage#Policy_subset|subset]] of the AWS specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container policies]. The following example shows how to create, apply, and view a policy. The first step is to create a policy JSON file:<br />
<br />
<!--T:45--><br />
<syntaxhighlight lang="json"><br />
{<br />
  "Version": "2012-10-17",<br />
  "Id": "S3PolicyId1",<br />
  "Statement": [<br />
    {<br />
      "Sid": "IPAllow",<br />
      "Effect": "Deny",<br />
      "Principal": "*",<br />
      "Action": "s3:*",<br />
      "Resource": [<br />
        "arn:aws:s3:::testbucket",<br />
        "arn:aws:s3:::testbucket/*"<br />
      ],<br />
      "Condition": {<br />
        "NotIpAddress": {<br />
          "aws:SourceIp": ["206.12.0.0/16", "142.104.0.0/16"]<br />
        }<br />
      }<br />
    }<br />
  ]<br />
}<br />
</syntaxhighlight><br />
<br />
<!--T:46--><br />
This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation: access to s3://testbucket is limited to the public IP address range used by the Arbutus cloud (206.12.0.0/16) and the public IP address range used by the University of Victoria (142.104.0.0/16).<br />
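The effect of the <code>NotIpAddress</code> condition can be reproduced locally with Python's standard <code>ipaddress</code> module, which is a handy way to sanity-check a CIDR list before putting it in a policy. This check is only an illustration; the actual enforcement happens server-side when the policy is applied:<br />

```python
import ipaddress

# The allowed ranges from the example policy: Arbutus cloud and UVic.
ALLOWED = [ipaddress.ip_network("206.12.0.0/16"),
           ipaddress.ip_network("142.104.0.0/16")]

def is_allowed(source_ip: str) -> bool:
    """Return True if source_ip falls inside any allowed CIDR range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED)

print(is_allowed("142.104.1.1"))  # True: inside UVic's /16
print(is_allowed("8.8.8.8"))      # False: outside both ranges, so denied
```
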
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the data container:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
== Policy subset == <!--T:47--><br />
<br />
<!--T:48--><br />
Currently, we support only the following actions:<br />
<br />
<!--T:49--><br />
* s3:AbortMultipartUpload<br />
* s3:CreateBucket<br />
* s3:DeleteBucketPolicy<br />
* s3:DeleteBucket<br />
* s3:DeleteBucketWebsite<br />
* s3:DeleteObject<br />
* s3:DeleteObjectVersion<br />
* s3:DeleteReplicationConfiguration<br />
* s3:GetAccelerateConfiguration<br />
* s3:GetBucketAcl<br />
* s3:GetBucketCORS<br />
* s3:GetBucketLocation<br />
* s3:GetBucketLogging<br />
* s3:GetBucketNotification<br />
* s3:GetBucketPolicy<br />
* s3:GetBucketRequestPayment<br />
* s3:GetBucketTagging<br />
* s3:GetBucketVersioning<br />
* s3:GetBucketWebsite<br />
* s3:GetLifecycleConfiguration<br />
* s3:GetObjectAcl<br />
* s3:GetObject<br />
* s3:GetObjectTorrent<br />
* s3:GetObjectVersionAcl<br />
* s3:GetObjectVersion<br />
* s3:GetObjectVersionTorrent<br />
* s3:GetReplicationConfiguration<br />
* s3:IPAddress<br />
* s3:NotIpAddress<br />
* s3:ListAllMyBuckets<br />
* s3:ListBucketMultipartUploads<br />
* s3:ListBucket<br />
* s3:ListBucketVersions<br />
* s3:ListMultipartUploadParts<br />
* s3:PutAccelerateConfiguration<br />
* s3:PutBucketAcl<br />
* s3:PutBucketCORS<br />
* s3:PutBucketLogging<br />
* s3:PutBucketNotification<br />
* s3:PutBucketPolicy<br />
* s3:PutBucketRequestPayment<br />
* s3:PutBucketTagging<br />
* s3:PutBucketVersioning<br />
* s3:PutBucketWebsite<br />
* s3:PutLifecycleConfiguration<br />
* s3:PutObjectAcl<br />
* s3:PutObject<br />
* s3:PutObjectVersionAcl<br />
* s3:PutReplicationConfiguration<br />
* s3:RestoreObject<br />
<br />
</translate><br />
[[Category:Cloud]]</div>
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
= Accessing your Arbutus Object Store =<br />
Setting access policies cannot be done via web browser but must be done with a [[Arbutus object storage clients|SWIFT or S3-compatible client]]. There are two ways to access your data containers:<br />
<br />
# if your data container policies are set to private (default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd).<br />
# if your object storage policies are set to public (not default), object storage is accessible using a browser via an HTTPS endpoint:<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER/FILENAME</code><br />
<br />
= Managing your Arbutus object store =<br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool which is available in Linux is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools ]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab in the OpenStack Dashboard at https://arbutus.cloud.computecanada.ca/. This interface refers to "data containers". Data containers are also known as buckets. In the dashboard you can create data containers, upload files, and create directories. Containers can also be created using S3-compatible CLI clients<br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside your cloud project. As such, the data containers and their management are up to the user including:<br />
<br />
<!--T:18--><br />
* Data containers are owned by the user who creates them, and no other user can manipulate them.<br />
* With a policy change, you can make a data container accessible to the world via URL<br />
* Data container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing data containers your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via json files.<br />
<br />
== Managing data containers policies for your Arbutus Object Store == <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your data container.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container polices]]. The following example shows how to create, apply, and view a policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges in Classless Inter-Domain Routing (CIDR) notation. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130583Arbutus object storage2023-03-02T17:47:41Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can either request an additional 9 TB available through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a project's containers are managed by that user, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. The main use case here is that when you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
= Accessing your Arbutus Object Store =<br />
Setting access policies cannot be done via web browser but must be done with a [[Arbutus object storage clients|SWIFT or S3-compatible client]]. There are two ways to access your data containers:<br />
<br />
# if your object storage policies are set to public (not default), object storage is accessible using a browser via an HTTPS endpoint:<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER/FILENAME</code><br />
# if your data container policies are set to private (default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd).<br />
<br />
= Managing your Arbutus object store =<br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool which is available in Linux is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools ]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab in the OpenStack Dashboard at https://arbutus.cloud.computecanada.ca/. This interface refers to "data containers". Data containers are also known as buckets. In the dashboard you can create data containers, upload files, and create directories. Containers can also be created using S3-compatible CLI clients<br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside your cloud project. As such, the data containers and their management are up to the user including:<br />
<br />
<!--T:18--><br />
* Data containers are owned by the user who creates them, and no other user can manipulate them.<br />
* With a policy change, you can make a data container accessible to the world via URL<br />
* Data container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing data containers your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via json files.<br />
<br />
== Managing data containers policies for your Arbutus Object Store == <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your data container.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container polices]]. The following example shows how to create, apply, and view a policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges in Classless Inter-Domain Routing (CIDR) notation. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130582Arbutus object storage2023-03-02T17:46:25Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can either request an additional 9 TB available through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a project's containers are managed by that user, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. The main use case here is that when you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
= Managing your Arbutus object store =<br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool which is available in Linux is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools ]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab in the OpenStack Dashboard at https://arbutus.cloud.computecanada.ca/. This interface refers to "data containers". Data containers are also known as buckets. In the dashboard you can create data containers, upload files, and create directories. Containers can also be created using S3-compatible CLI clients<br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside your cloud project. As such, the data containers and their management are up to the user including:<br />
<br />
<!--T:18--><br />
* Data containers are owned by the user who creates them, and no other user can manipulate them.<br />
* With a policy change, you can make a data container accessible to the world via URL<br />
* Data container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing data containers your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via json files.<br />
<br />
== Managing data containers policies for your Arbutus Object Store == <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your data container.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container polices]]. The following example shows how to create, apply, and view a policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges in Classless Inter-Domain Routing (CIDR) notation. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
<br />
== Accessing your Arbutus Object Store ==<br />
Setting access policies cannot be done via web browser but must be done with a [[Arbutus object storage clients|SWIFT or S3-compatible client]]. There are two ways to access your data containers:<br />
<br />
# if your object storage policies are set to public (not default), object storage is accessible using a browser via an HTTPS endpoint:<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER/FILENAME</code><br />
# if your data container policies are set to private (default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd).<br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130581Arbutus object storage2023-03-02T17:42:04Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can either request an additional 9 TB available through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a project's containers are managed by that user, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. The main use case here is that when you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
== Accessing your Arbutus Object Store ==<br />
There are two ways to access your data containers:<br />
# if your data container policies are set to private (default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Managing your object store such as setting policies cannot be done via web browser and must be done with an S3-compatible client<br />
# if your object storage policies are set to public (not default), object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER</code><br />
<br />
= Managing your Arbutus object store =<br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool which is available in Linux is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools ]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab in the OpenStack Dashboard at https://arbutus.cloud.computecanada.ca/. This interface refers to "data containers". Data containers are also known as buckets. In the dashboard you can create data containers, upload files, and create directories. Containers can also be created using S3-compatible CLI clients<br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside your cloud project. As such, the data containers and their management are up to the user including:<br />
<br />
<!--T:18--><br />
* Data containers are owned by the user who creates them, and no other user can manipulate them.<br />
* With a policy change, you can make a data container accessible to the world via URL<br />
* Data container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing data containers your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via json files.<br />
<br />
== Managing data containers policies for your Arbutus Object Store == <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your data container.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container polices]]. The following example shows how to create, apply, and view a policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
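Because a JSON object cannot repeat a key, multiple CIDR ranges must be supplied as a list under a single <code>aws:SourceIp</code> key. Generating the policy file programmatically avoids such syntax slips; here is a minimal Python sketch that writes the example policy above (bucket name and IP ranges taken from that example):

```python
import json

# Deny every S3 action on the bucket unless the request comes from
# one of the allowed source IP ranges (CIDR notation).
policy = {
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::testbucket",
                "arn:aws:s3:::testbucket/*",
            ],
            "Condition": {
                # JSON forbids duplicate keys, so multiple ranges go
                # in a list under a single "aws:SourceIp" key.
                "NotIpAddress": {
                    "aws:SourceIp": ["206.12.0.0/16", "142.104.0.0/16"]
                }
            },
        }
    ],
}

with open("testbucket.policy", "w") as f:
    json.dump(policy, f, indent=2)
```

The resulting <code>testbucket.policy</code> file can then be applied with <code>s3cmd setpolicy</code>.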
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130580Arbutus object storage2023-03-02T17:39:23Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated 1 TB of object storage by default. If more is required, you can request up to an additional 9 TB through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10 TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition].<br />
<br />
<!--T:30--><br />
Unlike in a cluster computing environment, system administration for a project's containers is handled by the user, including operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all of the functionality of S3. In particular, if you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys with S3, which can be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
To manage your Arbutus Object Store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
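The access ID and secret key produced by this command are then configured in your S3 client. As an illustrative sketch only (the key values are placeholders; consult the s3cmd configuration instructions for the authoritative settings), a minimal <code>~/.s3cfg</code> pointing at Arbutus might look like:

```ini
# Minimal s3cmd configuration sketch for Arbutus object storage.
# access_key / secret_key are placeholders -- use the values from
# "openstack ec2 credentials create".
[default]
access_key = YOUR_ACCESS_ID
secret_key = YOUR_SECRET_KEY
host_base = object-arbutus.cloud.computecanada.ca
host_bucket = object-arbutus.cloud.computecanada.ca
use_https = True
```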
<br />
== Accessing your Arbutus Object Store ==<br />
There are two ways to access your data containers:<br />
# If your data container policies are set to private (the default), your object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Management tasks such as setting policies cannot be done via a web browser and must be done with an S3-compatible client.<br />
# If your object storage policies are set to public (not the default), your object storage is also accessible via an HTTPS endpoint:<br />
<br />
= Managing your Arbutus object store =<br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool, available in Linux, is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab in the OpenStack Dashboard at https://arbutus.cloud.computecanada.ca/. This interface refers to "data containers", which are also known as buckets. In the dashboard you can create data containers, upload files, and create directories. Containers can also be created using S3-compatible CLI clients.<br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside their cloud project. As such, data containers and their management are up to the user, including the following:<br />
<br />
<!--T:18--><br />
* Data containers are owned by the user who creates them, and no other user can manipulate them.<br />
* With a policy change, you can make a data container accessible to the world via a URL.<br />
* Data container names must be unique across '''all''' users in the Object Store, so you may benefit from prefixing data container names with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via JSON files.<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER</code><br />
<br />
== Managing data containers policies for your Arbutus Object Store == <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your data container.<br />
}}<br />
<br />
<!--T:34--><br />
<p>Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container policies]. The following example shows how to create, apply, and view a policy. The first step is to create a policy JSON file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130579Arbutus object storage2023-03-02T17:38:50Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated 1 TB of object storage by default. If more is required, you can request up to an additional 9 TB through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10 TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition].<br />
<br />
<!--T:30--><br />
Unlike in a cluster computing environment, system administration for a project's containers is handled by the user, including operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all of the functionality of S3. In particular, if you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys with S3, which can be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
To manage your Arbutus Object Store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers:<br />
# If your data container policies are set to private (the default), your object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Management tasks such as setting policies cannot be done via a web browser and must be done with an S3-compatible client.<br />
# If your object storage policies are set to public (not the default), your object storage is also accessible via an HTTPS endpoint:<br />
<br />
= Managing your Arbutus object store =<br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool, available in Linux, is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab in the OpenStack Dashboard at https://arbutus.cloud.computecanada.ca/. This interface refers to "data containers", which are also known as buckets. In the dashboard you can create data containers, upload files, and create directories. Containers can also be created using S3-compatible CLI clients.<br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside their cloud project. As such, data containers and their management are up to the user, including the following:<br />
<br />
<!--T:18--><br />
* Data containers are owned by the user who creates them, and no other user can manipulate them.<br />
* With a policy change, you can make a data container accessible to the world via a URL.<br />
* Data container names must be unique across '''all''' users in the Object Store, so you may benefit from prefixing data container names with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via JSON files.<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER</code><br />
<br />
== Managing data containers policies for your Arbutus Object Store == <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your data container.<br />
}}<br />
<br />
<!--T:34--><br />
<p>Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container policies]. The following example shows how to create, apply, and view a policy. The first step is to create a policy JSON file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130578Arbutus object storage2023-03-02T17:36:29Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated 1 TB of object storage by default. If more is required, you can request up to an additional 9 TB through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10 TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition].<br />
<br />
<!--T:30--><br />
Unlike in a cluster computing environment, system administration for a project's containers is handled by the user, including operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all of the functionality of S3. In particular, if you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys with S3, which can be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Establishing access to your Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
To manage your Arbutus Object Store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
= Managing your Arbutus object store =<br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool, available in Linux, is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab in the OpenStack Dashboard at https://arbutus.cloud.computecanada.ca/. This interface refers to "data containers", which are also known as buckets. In the dashboard you can create data containers, upload files, and create directories. Containers can also be created using S3-compatible CLI clients.<br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside their cloud project. As such, data containers and their management are up to the user, including the following:<br />
<br />
<!--T:18--><br />
* Data containers are owned by the user who creates them, and no other user can manipulate them.<br />
* With a policy change, you can make a data container accessible to the world via a URL.<br />
* Data container names must be unique across '''all''' users in the Object Store, so you may benefit from prefixing data container names with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via JSON files.<br />
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers:<br />
# If your data container policies are set to private (the default), your object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Management tasks such as setting policies cannot be done via a web browser and must be done with an S3-compatible client.<br />
# If your object storage policies are set to public (not the default), your object storage is also accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER</code><br />
<br />
= Managing data containers policies for your Arbutus Object Store = <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your data container.<br />
}}<br />
<br />
<!--T:34--><br />
<p>Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container policies]. The following example shows how to create, apply, and view a policy. The first step is to create a policy JSON file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130527Arbutus object storage2023-02-28T18:08:23Z<p>Rmc: numerous fixes including standardizing on data containers vs buckets</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated 1 TB of object storage by default. If more is required, you can request up to an additional 9 TB through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10 TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition].<br />
<br />
<!--T:30--><br />
Unlike in a cluster computing environment, system administration for a project's containers is handled by the user, including operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all of the functionality of S3. In particular, if you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys with S3, which can be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Setting up and configuring access to the Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
To manage your Arbutus Object Store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool, available in Linux, is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab in the OpenStack Dashboard at https://arbutus.cloud.computecanada.ca/. This interface refers to "data containers", which are also known as buckets. In the dashboard you can create data containers, upload files, and create directories. Containers can also be created using S3-compatible CLI clients.<br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside their cloud project. As such, data containers and their management are up to the user.<br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Data containers are owned by the user who creates them, and no other user can manipulate them.<br />
* With a policy change, you can make a data container accessible to the world via a URL.<br />
* Data container names must be unique across '''all''' users in the Object Store, so you may benefit from prefixing data container names with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via JSON files.<br />
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers:<br />
# if your data container policies are set to private (the default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Managing your object store, such as setting policies, cannot be done via a web browser and must be done with an S3-compatible client;<br />
# if your object storage policies are set to public (not default), object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/DATA_CONTAINER</code><br />
<br />
= Managing data container policies for your Arbutus Object Store = <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your data container.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html data container policies]. The following example shows how to create, apply, and view a policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to s3://testbucket is limited to the public IP address range used by the Arbutus cloud (206.12.0.0/16) and the public IP address range used by the University of Victoria (142.104.0.0/16).</p><br />
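The same policy can be assembled programmatically, which avoids hand-editing mistakes (note that multiple addresses belong in a JSON array). A Python sketch using the bucket name and CIDR ranges from the example above, with a quick sanity check that a client address actually falls inside the allowed ranges:

```python
import ipaddress
import json

ALLOWED = ["206.12.0.0/16", "142.104.0.0/16"]  # Arbutus and UVic public ranges

policy = {
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [{
        "Sid": "IPAllow",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::testbucket", "arn:aws:s3:::testbucket/*"],
        # Deny everything NOT coming from the allowed ranges:
        "Condition": {"NotIpAddress": {"aws:SourceIp": ALLOWED}},
    }],
}

def ip_allowed(ip: str) -> bool:
    """Would this source address escape the Deny statement?"""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net) for net in ALLOWED)

# Write the file that `s3cmd setpolicy` expects:
with open("testbucket.policy", "w") as f:
    json.dump(policy, f, indent=2)
```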
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can apply that policy to the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can either request an additional 9 TB available through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a project's containers are managed by that user, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. The main use case here is that when you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Setting up and configuring access to the Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool which is available in Linux is the preferred way to access the Arbutus Object Store. We have [[Accessing_object_storage_with_s3cmd|specific instructions]] on configuring and managing access with the s3cmd client. There are [[Arbutus object storage clients|multiple S3-compatible tools ]] that will work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to data containers (AKA buckets). You can create data containers with this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside your cloud project. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from it.<br />
* Container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via json files.<br />
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers/buckets:<br />
# if your data container policies are set to private (default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Managing your object store such as setting policies cannot be done via web browser and must be done with an S3-compatible client<br />
# if your object storage policies are set to public (not default), object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/BUCKET</code><br />
<br />
= Managing data containers policies for your Arbutus Object Store = <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your bucket.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket polices]]. The following example shows how to create, apply, and view a bucket's policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges in Classless Inter-Domain Routing (CIDR) notation. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130520Arbutus object storage2023-02-28T17:44:04Z<p>Rmc: /* Accessing your Arbutus Object Store */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can either request an additional 9 TB available through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a project's containers are managed by that user, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. The main use case here is that when you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Setting up and configuring access to the Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus object storage clients|other tools]] out there that will also work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to data containers (AKA buckets). You can create data containers with this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside your cloud project. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from it.<br />
* Container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via json files.<br />
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers/buckets:<br />
# if your data container policies are set to private (default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Managing your object store such as setting policies cannot be done via web browser and must be done with an S3-compatible client<br />
# if your object storage policies are set to public (not default), object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/BUCKET</code><br />
<br />
= Managing data containers policies for your Arbutus Object Store = <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your bucket.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket polices]]. The following example shows how to create, apply, and view a bucket's policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges in Classless Inter-Domain Routing (CIDR) notation. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130519Arbutus object storage2023-02-28T17:43:21Z<p>Rmc: /* Accessing your Arbutus Object Store */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can either request an additional 9 TB available through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a project's containers are managed by that user, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. The main use case here is that when you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Setting up and configuring access to the Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus object storage clients|other tools]] out there that will also work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to data containers (AKA buckets). You can create data containers with this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside your cloud project. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from it.<br />
* Container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via json files.<br />
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers/buckets:<br />
1. if your data container policies are set to private (default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Managing your object store such as setting policies cannot be done via web browser and must be done with an S3-compatible client<br />
1. if your object storage policies are set to public (not default), object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/BUCKET</code><br />
<br />
= Managing data containers policies for your Arbutus Object Store = <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your bucket.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket polices]]. The following example shows how to create, apply, and view a bucket's policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges in Classless Inter-Domain Routing (CIDR) notation. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130518Arbutus object storage2023-02-28T17:42:47Z<p>Rmc: /* Accessing your Arbutus Object Store */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can either request an additional 9 TB available through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a project's containers are managed by that user, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. The main use case here is that when you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Setting up and configuring access to the Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus object storage clients|other tools]] out there that will also work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to data containers (AKA buckets). You can create data containers with this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
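For a ''Public'' container, the object URL follows directly from the container and object names. The sketch below (the container and object names are made up for illustration) assembles such a URL, percent-encoding the object name so that keys containing spaces remain valid:

```python
from urllib.parse import quote

# Public HTTPS endpoint of the Arbutus object store
BASE = "https://object-arbutus.cloud.computecanada.ca"

def public_object_url(container: str, object_name: str) -> str:
    """Build the public (read-only) URL for an object in a Public container."""
    # quote() keeps "/" as a path separator but escapes spaces and other
    # characters that are not URL-safe.
    return f"{BASE}/{quote(container)}/{quote(object_name)}"

print(public_object_url("def-myname-test", "results/run 1.csv"))
# -> https://object-arbutus.cloud.computecanada.ca/def-myname-test/results/run%201.csv
```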
<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside their cloud project; as such, creating and managing buckets is up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from it.<br />
* Container names must be unique across '''all''' users in the Object Store, so you may benefit from prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via JSON files.<br />
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers/buckets:<br />
* if your data container policies are set to private (the default), object storage is accessible via an [[Arbutus_object_storage_clients|S3-compatible client]] (e.g. s3cmd). Managing your object store (such as setting policies) cannot be done via a web browser and must be done with an S3-compatible client;<br />
* if your policies are set to public (not default), object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>https://object-arbutus.cloud.computecanada.ca:443/BUCKET</code><br />
<br />
= Managing data containers policies for your Arbutus Object Store = <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your bucket.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to <code>s3://testbucket</code> is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
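Because an ill-conceived policy can lock you out, it can be worth sanity-checking the ranges before applying the file. The sketch below (plain Python, not part of s3cmd) mirrors the <code>Deny</code>/<code>NotIpAddress</code> logic of the example policy to confirm which client IPs would retain access:

```python
import ipaddress

# CIDR ranges from the example policy: Arbutus cloud and UVic.
ALLOWED_RANGES = ["206.12.0.0/16", "142.104.0.0/16"]

def request_allowed(client_ip: str) -> bool:
    """Mirror the policy: Deny applies when the source IP is NOT in the
    listed ranges, so a request survives only if the IP is inside one."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(net) for net in ALLOWED_RANGES)

print(request_allowed("206.12.34.56"))  # inside the Arbutus range -> True
print(request_allowed("8.8.8.8"))       # outside both ranges -> False
```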
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy, use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can either request an additional 9 TB available through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a project's containers are managed by that user, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionalities of S3. The main use case here is that when you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Setting up and configuring access to the Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool, available in Linux, is the preferred way to access our S3 gateway; however, there are [[Arbutus object storage clients|other tools]] that will also work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to data containers (AKA buckets). You can create data containers with this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
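For scripting, the public read-only URL shown above can be assembled programmatically. The sketch below is a minimal illustration using only the Python standard library; the container and object names are made-up examples. Object names containing spaces or special characters must be percent-encoded in the URL:<br />

```python
from urllib.parse import quote

ENDPOINT = "https://object-arbutus.cloud.computecanada.ca"

def public_object_url(container: str, object_name: str) -> str:
    """Build the read-only URL for an object in a *public* container.

    The object part is percent-encoded ('/' is kept as a path
    separator), since object names may contain spaces or other
    characters that are not valid in a URL.
    """
    return f"{ENDPOINT}/{quote(container)}/{quote(object_name)}"

print(public_object_url("def-myname-test", "results/run 1.csv"))
# https://object-arbutus.cloud.computecanada.ca/def-myname-test/results/run%201.csv
```

Remember that such a URL only works if the container was created as ''Public''.<br />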
<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside their cloud project. As such, the creation and management of buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from it.<br />
* Container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via JSON files.<br />
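Because container names must be globally unique and clients reject malformed names, it can help to sanity-check a proposed name locally before attempting to create it. The sketch below encodes common S3 bucket-naming conventions (3&ndash;63 characters; lowercase letters, digits, hyphens, and periods; starting and ending with a letter or digit) &mdash; these rules are an assumption based on standard S3 practice, and global uniqueness can only be confirmed by the object store itself:<br />

```python
import re

# Common S3-style naming rules; the Arbutus store's exact limits may differ,
# and a syntactically valid name can still collide with an existing one.
NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def plausible_container_name(name: str) -> bool:
    """Return True if `name` looks like a valid container/bucket name."""
    return bool(NAME_RE.match(name))

assert plausible_container_name("def-myname-test")   # project-prefixed: good
assert not plausible_container_name("Test_Bucket")   # uppercase/underscore: rejected
assert not plausible_container_name("ab")            # too short: rejected
```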
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers/buckets:<br />
* if your data container policies are set to private (default), object storage is accessible via an S3 client (e.g. s3cmd)<br />
* if your policies are set to public (not default), object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<br />
= Managing data container policies for your Arbutus Object Store = <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your bucket.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
  &quot;Version&quot;: &quot;2012-10-17&quot;,<br />
  &quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
  &quot;Statement&quot;: [<br />
    {<br />
      &quot;Sid&quot;: &quot;IPAllow&quot;,<br />
      &quot;Effect&quot;: &quot;Deny&quot;,<br />
      &quot;Principal&quot;: &quot;*&quot;,<br />
      &quot;Action&quot;: &quot;s3:*&quot;,<br />
      &quot;Resource&quot;: [<br />
        &quot;arn:aws:s3:::testbucket&quot;,<br />
        &quot;arn:aws:s3:::testbucket/*&quot;<br />
      ],<br />
      &quot;Condition&quot;: {<br />
        &quot;NotIpAddress&quot;: {<br />
          &quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
        }<br />
      }<br />
    }<br />
  ]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to <code>s3://testbucket</code> is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
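Generating the policy file programmatically avoids JSON syntax slips such as missing commas or repeated keys. The sketch below is a minimal illustration (the function name is made up for this example); note that <code>aws:SourceIp</code> takes a ''list'' of CIDR ranges:<br />

```python
import json

def ip_allowlist_policy(bucket: str, cidrs: list) -> str:
    """Deny all S3 actions unless the request comes from one of `cidrs`.

    `aws:SourceIp` takes a list of CIDR ranges; repeating the key
    instead would produce invalid JSON (duplicate keys).
    """
    policy = {
        "Version": "2012-10-17",
        "Id": "S3PolicyId1",
        "Statement": [{
            "Sid": "IPAllow",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"NotIpAddress": {"aws:SourceIp": cidrs}},
        }],
    }
    return json.dumps(policy, indent=2)

# Write the policy file that s3cmd setpolicy will consume.
with open("testbucket.policy", "w") as f:
    f.write(ip_allowlist_policy("testbucket", ["206.12.0.0/16", "142.104.0.0/16"]))
```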
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=130352Arbutus object storage2023-02-27T19:56:58Z<p>Rmc: Removed s3cmd details which are going to the new s3cmd page Sarah is creating except for the examples at the bottom where the s3cmd command is used to apply a policy.</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a service that manages data as objects. This is different from other storage architectures where data is managed in a file hierarchy. Objects can be created, replaced, or deleted, but unlike traditional storage, they cannot be edited in place. Object storage has become popular due to its ability to handle large files and large numbers of files, and due to the prevalence of compatible tools.<br />
<br />
<!--T:28--><br />
Unlike other storage types, a unit of data or ''object'' is managed as a whole, and the information within it cannot be modified in place. Objects are stored in containers in the object store. The containers are stored in a way that makes them easier and often faster to access than in a traditional filesystem.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules. We recommend using it with software or platforms that are designed to work with data living in an object store.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of object storage. If more is required, you can request an additional 9TB through our [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. More than 10TB must be requested and allocated under the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition].<br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration of a project's containers is managed by the project's users, which includes operations like [[Backing up your VM|backups]]. For more information about differences between object storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the OpenStack Object Store via two different protocols: Swift or Amazon Simple Storage Service (S3).<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as object storage containers and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of the Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all of the functionality of S3. The main difference is that if you want to manage your object storage containers using access policies, you must use S3, as Swift does not support access policies. You can also create and manage your own keys using S3, which could be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:<br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Setting up and configuring access to the Arbutus object store = <!--T:8--><br />
<br />
<!--T:13--><br />
In order to manage your Arbutus Object Store, you will need your own storage access ID and secret key. To generate these, use the [[OpenStack command line clients|OpenStack command line client]]:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <code>s3cmd</code> tool, available in Linux, is the preferred way to access our S3 gateway; however, there are [[Arbutus object storage clients|other tools]] that will also work.<br />
<br />
<!--T:10--><br />
You can also perform some management tasks for your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to data containers (AKA buckets). You can create data containers with this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the Internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside their cloud project. As such, the creation and management of buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from it.<br />
* Container names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a container named ''test'', but ''def-myname-test'' is probably OK.<br />
* Container policies are managed via JSON files.<br />
<br />
= Accessing your Arbutus Object Store =<br />
There are two ways to access your data containers/buckets:<br />
* if your data container policies are set to private (default), object storage is accessible via an S3 client (e.g. s3cmd)<br />
* if your policies are set to public (not default), object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<br />
= Managing data container policies for your Arbutus Object Store = <!--T:31--><br />
{{Warning<br />
|title=Attention<br />
|content=<br />
Be careful with policies because an ill-conceived policy can lock you out of your bucket.<br />
}}<br />
<br />
<!--T:34--><br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
  &quot;Version&quot;: &quot;2012-10-17&quot;,<br />
  &quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
  &quot;Statement&quot;: [<br />
    {<br />
      &quot;Sid&quot;: &quot;IPAllow&quot;,<br />
      &quot;Effect&quot;: &quot;Deny&quot;,<br />
      &quot;Principal&quot;: &quot;*&quot;,<br />
      &quot;Action&quot;: &quot;s3:*&quot;,<br />
      &quot;Resource&quot;: [<br />
        &quot;arn:aws:s3:::testbucket&quot;,<br />
        &quot;arn:aws:s3:::testbucket/*&quot;<br />
      ],<br />
      &quot;Condition&quot;: {<br />
        &quot;NotIpAddress&quot;: {<br />
          &quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
        }<br />
      }<br />
    }<br />
  ]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to <code>s3://testbucket</code> is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=127007Arbutus object storage2023-01-26T15:41:45Z<p>Rmc: warning template not working so switching Warning to bold</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but avoids some of its performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
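The "basically a key-value store" model above can be sketched with a few lines of code. This is a conceptual toy (the class and method names are invented for illustration), not an interface to any real object store:<br />

```python
# A toy model of the flat bucket:object namespace described above.
# Objects are written and read only as wholes; there is no seek or
# partial write, which is what lets a provider keep the internal
# representation simple.
class ToyObjectStore:
    def __init__(self):
        self._store = {}  # (bucket, key) -> bytes

    def put(self, bucket: str, key: str, data: bytes) -> None:
        # Replacing a whole object is allowed...
        self._store[(bucket, key)] = data

    def get(self, bucket: str, key: str) -> bytes:
        return self._store[(bucket, key)]

    # ...but there is deliberately no write_at(offset) or append().

store = ToyObjectStore()
store.put("def-myname-test", "data.csv", b"a,b\n1,2\n")
assert store.get("def-myname-test", "data.csv") == b"a,b\n1,2\n"
```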
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and have simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration of a user's Object Storage buckets is managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all the functionality of S3. The main difference is that if you want to manage your buckets using bucket policies, you must use S3, as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside the ''tenant''. As such, the creation and management of buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via JSON files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
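As an alternative to <code>--configure</code>, the minimal configuration shown above can be generated from a script. The sketch below writes to a local file named <code>s3cfg.example</code> (a made-up name, so nothing in your home directory is overwritten; <code>s3cmd</code> normally reads <code>~/.s3cfg</code>), with placeholder keys that you must replace with the values from <code>openstack ec2 credentials create</code>:<br />

```python
import configparser
from pathlib import Path

# Generate the minimal s3cmd configuration shown above. We write to a
# local example file rather than ~/.s3cfg so nothing is clobbered.
cfg = configparser.ConfigParser()
cfg["default"] = {
    "access_key": "REPLACE_ME",   # from `openstack ec2 credentials create`
    "secret_key": "REPLACE_ME",   # from `openstack ec2 credentials create`
    "host_base": "object-arbutus.cloud.computecanada.ca",
    "host_bucket": "object-arbutus.cloud.computecanada.ca",
    "check_ssl_certificate": "True",
    "check_ssl_hostname": "True",
    "use_https": "True",
}
with open("s3cfg.example", "w") as f:
    cfg.write(f)

print(Path("s3cfg.example").read_text().splitlines()[0])  # [default]
```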
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket</code></p></li><br />
</ul><br />
<br />
= Bucket policies = <!--T:31--><br />
'''Warning''': Be careful with policies because an ill-conceived policy can lock you out of your bucket.<br />
<br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
  &quot;Version&quot;: &quot;2012-10-17&quot;,<br />
  &quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
  &quot;Statement&quot;: [<br />
    {<br />
      &quot;Sid&quot;: &quot;IPAllow&quot;,<br />
      &quot;Effect&quot;: &quot;Deny&quot;,<br />
      &quot;Principal&quot;: &quot;*&quot;,<br />
      &quot;Action&quot;: &quot;s3:*&quot;,<br />
      &quot;Resource&quot;: [<br />
        &quot;arn:aws:s3:::testbucket&quot;,<br />
        &quot;arn:aws:s3:::testbucket/*&quot;<br />
      ],<br />
      &quot;Condition&quot;: {<br />
        &quot;NotIpAddress&quot;: {<br />
          &quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
        }<br />
      }<br />
    }<br />
  ]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to <code>s3://testbucket</code> is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=127006Arbutus object storage2023-01-26T15:39:37Z<p>Rmc: fixed the warning markdown</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but avoids some of its performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and have simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration of a user's Object Storage buckets is managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is the default and is simpler since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all the functionality of S3. The main difference is that if you want to manage your buckets using bucket policies, you must use S3, as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside the ''tenant''. As such, the creation and management of buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via JSON files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket</code></p></li><br />
</ul><br />
<br />
= Bucket policies = <!--T:31--><br />
{{warning}} Be careful with policies because an ill-conceived policy can lock you out of your bucket.<br />
<br />
Currently, Arbutus Object Storage only implements a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
  &quot;Version&quot;: &quot;2012-10-17&quot;,<br />
  &quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
  &quot;Statement&quot;: [<br />
    {<br />
      &quot;Sid&quot;: &quot;IPAllow&quot;,<br />
      &quot;Effect&quot;: &quot;Deny&quot;,<br />
      &quot;Principal&quot;: &quot;*&quot;,<br />
      &quot;Action&quot;: &quot;s3:*&quot;,<br />
      &quot;Resource&quot;: [<br />
        &quot;arn:aws:s3:::testbucket&quot;,<br />
        &quot;arn:aws:s3:::testbucket/*&quot;<br />
      ],<br />
      &quot;Condition&quot;: {<br />
        &quot;NotIpAddress&quot;: {<br />
          &quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
        }<br />
      }<br />
    }<br />
  ]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to <code>s3://testbucket</code> is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Cloud_shared_security_responsibility_model&diff=126986Cloud shared security responsibility model2023-01-25T22:59:16Z<p>Rmc: removed references to Compute Canada and broken link to the Terms of Use</p>
<hr />
<div><languages /><br />
<br />
<translate><br />
<br />
<!--T:7--><br />
Canada’s advanced research computing environment includes several cloud platforms for research. This document’s purpose is to describe the responsibilities of the cloud teams who administer our cloud platforms, the responsibilities of the many research teams who use these platforms, and the responsibilities shared between both. “Security in the cloud” is the responsibility of our research teams. “Security of the cloud” is the responsibility of our cloud teams.<br />
[[File:Cloud_shared_security_responsibility_model.png|600px|thumb|center| Cloud shared security responsibility model (Click for larger image)]]<br />
<br />
==Research team responsibilities: security in the cloud== <!--T:3--><br />
Research teams are responsible for security controls to protect the confidentiality, integrity, and availability of their research data. Each team is responsible for installing, configuring, and managing their virtual machines, as well as their operating systems, services, and applications. They must [[Security_considerations_when_running_a_VM#Updating_your_VM|apply updates]] and security patches on a timely basis. They must configure security group rules that limit the services exposed to the Internet. They must ensure backup and recovery procedures are implemented and tested. They must ensure the [https://en.wikipedia.org/wiki/Principle_of_least_privilege principle of least privilege] is followed when granting access.<br />
<br />
==Cloud team responsibilities: security of the cloud== <!--T:4--><br />
Cloud teams are responsible for protecting our cloud platforms. They are responsible for configuring and managing the underlying compute, storage, database, and networking capabilities. They must apply updates and security patches applicable to the cloud platform on a timely basis. The environmental and physical security of the cloud infrastructure is also their responsibility. <br />
<br />
<!--T:8--><br />
Our cloud teams do not support or manage virtual machines. However, if a virtual machine is adversely impacting others, it may be shut down and locked by a cloud team. In such cases, the research team may be asked to provide a remediation plan before access to the virtual machine is restored, in order to protect others.<br />
<br />
==Shared responsibilities== <!--T:5--><br />
Compliance is a shared responsibility between our cloud teams and the research teams using our cloud services. Everyone is responsible for complying with applicable laws, policies, procedures, and contracts. Alliance Federation and institutional policy compliance is required, particularly with respect to the [https://www.computecanada.ca/research-portal/information-security/terms-of-use/ Terms of Use]. Being good “net citizens” will protect the reputation of our networks and prevent all of us from being blocked or banned.<br />
<br />
<!--T:9--><br />
If you have any questions about this model, please contact cloud@computecanada.ca.<br />
<br />
==Further resources== <!--T:6--><br />
For more information please see the following resources:<br />
* [[Cloud|Alliance Federation’s cloud service description]]<br />
* [[Security_considerations_when_running_a_VM|Cloud security considerations for research teams]]<br />
* [https://alliancecan.ca/sites/default/files/2022-03/1-terms-of-use.pdf Alliance Federation’s Terms of Use]<br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Cloud_storage_options&diff=126885Cloud storage options2023-01-24T14:08:30Z<p>Rmc: more clairifications to backups</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
<!--T:1--><br />
The existing storage types available in our clouds are:<br />
<br />
<!--T:2--><br />
* '''[[Working_with_volumes | Volume storage]]''': The standard storage unit for cloud computing; can be attached to and detached from an instance. <br />
* '''Ephemeral/Disk storage''': Virtual local disk storage tied to the lifecycle of a single instance.<br />
* '''[[ Arbutus_Object_Storage | Object storage]]''': Non-hierarchical storage where data is created or uploaded in whole-file form.<br />
* '''[[Arbutus_CephFS | Shared filesystem storage]]''': Storage in the cloud shared filesystem; must be configured on each instance where it is mounted.<br />
<br />
<!--T:3--><br />
Attributes of each storage type are compared in the following table:<br />
<br />
<!--T:4--><br />
{| class="wikitable sortable"<br />
! Attribute !! Volume storage !! Ephemeral/Disk storage !! Object storage !! Shared filesystem storage <br />
|-<br />
| Default storage option || Yes || Yes || No || No<br />
|-<br />
| Can be accessed via web browser || No || No || Yes || No <br />
|-<br />
| Access can be restricted by source IP || Yes || Yes || Yes (S3 ACL) || Yes <br />
|-<br />
| Can be mounted on a single VM || Yes || Yes || No || Yes <br />
|-<br />
| Can be mounted on multiple VMs (and across projects) simultaneously || No || No || No || Yes <br />
|-<br />
| Automatic backups || No (Yes with snapshots) || No || No || Yes (nightly to TSM)<br />
|-<br />
| Suitable for write-once, read-only, and public access || No || No || Yes || No <br />
|-<br />
| Suitable for data/files that change frequently || Yes || Yes || No || Yes<br />
|-<br />
| Hierarchical filesystem || Yes || Yes || No || Yes <br />
|-<br />
| Suitable for long-term storage || Yes || No || Yes || No <br />
|-<br />
| Deleted automatically upon deletion of VM || No || Yes || No || No <br />
|- <br />
| Standard magnitude of allocation || GB || GB || TB || TB <br />
|- <br />
|}<br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Cloud_storage_options&diff=126884Cloud storage options2023-01-24T14:07:31Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
<!--T:1--><br />
The existing storage types available in our clouds are:<br />
<br />
<!--T:2--><br />
* '''[[Working_with_volumes | Volume storage]]''': The standard storage unit for cloud computing; can be attached to and detached from an instance. <br />
* '''Ephemeral/Disk storage''': Virtual local disk storage tied to the lifecycle of a single instance.<br />
* '''[[ Arbutus_Object_Storage | Object storage]]''': Non-hierarchical storage where data is created or uploaded in whole-file form.<br />
* '''[[Arbutus_CephFS | Shared filesystem storage]]''': Storage in the cloud shared filesystem; must be configured on each instance where it is mounted.<br />
<br />
<!--T:3--><br />
Attributes of each storage type are compared in the following table:<br />
<br />
<!--T:4--><br />
{| class="wikitable sortable"<br />
! Attribute !! Volume storage !! Ephemeral/Disk storage !! Object storage !! Shared filesystem storage <br />
|-<br />
| Default storage option || Yes || Yes || No || No<br />
|-<br />
| Can be accessed via web browser || No || No || Yes || No <br />
|-<br />
| Access can be restricted by source IP || Yes || Yes || Yes (S3 ACL) || Yes <br />
|-<br />
| Can be mounted on a single VM || Yes || Yes || No || Yes <br />
|-<br />
| Can be mounted on multiple VMs (and across projects) simultaneously || No || No || No || Yes <br />
|-<br />
| Backed up || No (yes with snapshots) || No || No || Yes (nightly to TSM)<br />
|-<br />
| Suitable for write-once, read-only, and public access || No || No || Yes || No <br />
|-<br />
| Suitable for data/files that change frequently || Yes || Yes || No || Yes<br />
|-<br />
| Hierarchical filesystem || Yes || Yes || No || Yes <br />
|-<br />
| Suitable for long-term storage || Yes || No || Yes || No <br />
|-<br />
| Deleted automatically upon deletion of VM || No || Yes || No || No <br />
|- <br />
| Standard magnitude of allocation || GB || GB || TB || TB <br />
|- <br />
|}<br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Cloud_storage_options&diff=126883Cloud storage options2023-01-24T14:05:25Z<p>Rmc: added snapshots to volumes</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
<!--T:1--><br />
The existing storage types available in our clouds are:<br />
<br />
<!--T:2--><br />
* '''[[Working_with_volumes | Volume storage]]''': The standard storage unit for cloud computing; can be attached to and detached from an instance. <br />
* '''Ephemeral/Disk storage''': Virtual local disk storage tied to the lifecycle of a single instance.<br />
* '''[[ Arbutus_Object_Storage | Object storage]]''': Non-hierarchical storage where data is created or uploaded in whole-file form.<br />
* '''[[Arbutus_CephFS | Shared filesystem storage]]''': Storage in the cloud shared filesystem; must be configured on each instance where it is mounted.<br />
<br />
<!--T:3--><br />
Attributes of each storage type are compared in the following table:<br />
<br />
<!--T:4--><br />
{| class="wikitable sortable"<br />
! Attribute !! Volume storage !! Ephemeral/Disk storage !! Object storage !! Shared filesystem storage <br />
|-<br />
| Default storage option || Yes || Yes || No || No<br />
|-<br />
| Can be accessed via web browser || No || No || Yes || No <br />
|-<br />
| Access can be restricted by source IP || Yes || Yes || Yes (S3 ACL) || Yes <br />
|-<br />
| Can be mounted on a single VM || Yes || Yes || No || Yes <br />
|-<br />
| Can be mounted on multiple VMs (and across projects) simultaneously || No || No || No || Yes <br />
|-<br />
| Backed up || Yes (with manually created snapshots) || No || No || Yes <br />
|-<br />
| Suitable for write-once, read-only, and public access || No || No || Yes || No <br />
|-<br />
| Suitable for data/files that change frequently || Yes || Yes || No || Yes<br />
|-<br />
| Hierarchical filesystem || Yes || Yes || No || Yes <br />
|-<br />
| Suitable for long-term storage || Yes || No || Yes || No <br />
|-<br />
| Deleted automatically upon deletion of VM || No || Yes || No || No <br />
|- <br />
| Standard magnitude of allocation || GB || GB || TB || TB <br />
|- <br />
|}<br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126858Arbutus object storage2023-01-21T20:19:22Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem and, as a result, avoids some of its performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create or upload an object as a whole, but you cannot modify bytes within it. Objects are named as <code>bucket:tag</code> with no further nesting. Since object operations are basically whole-file, the provider can use a simpler internal representation, and the flat namespace lets the provider avoid metadata bottlenecks; it is essentially a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can apply for either a RAS (Rapid Access Service) or a RAC (Resource Allocation Competition) allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets is managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift and S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is available by default and is simpler, since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all the functionality of S3. In particular, if you want to manage your buckets using bucket policies, you must use S3, as Swift does not support them. You can also create and manage your own keys using S3, which could be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool, which is available in Linux, is the preferred way to access our S3 gateway; however, there are [[Arbutus Object Storage Clients|other tools]] that will also work.<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside the ''tenant''; as such, creating and managing buckets is up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via JSON files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
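As an illustrative sketch of the configuration above, the file can also be written directly from the shell. The key values below are placeholders, not real credentials, and the file is written to a temporary path here so nothing in your home directory is overwritten:<br />

```shell
# Write a minimal s3cmd configuration to a temporary file.
# REPLACE_WITH_* values are placeholders for your real EC2 credentials.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
[default]
access_key = REPLACE_WITH_ACCESS_KEY
check_ssl_certificate = True
check_ssl_hostname = True
host_base = object-arbutus.cloud.computecanada.ca
host_bucket = object-arbutus.cloud.computecanada.ca
secret_key = REPLACE_WITH_SECRET_KEY
use_https = True
EOF
# Sanity check: both host settings point at the Arbutus endpoint.
grep -c 'object-arbutus' "$CFG"
```

s3cmd would then be invoked as <code>s3cmd -c "$CFG" ls</code>, or the file saved as <code>~/.s3cfg</code>, s3cmd's default configuration location.<br />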
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket</code></p></li><br />
</ul><br />
<br />
= Bucket policies = <!--T:31--><br />
{{Warning|heading=ATTENTION|Be careful with policies because an ill-conceived policy can lock you out of your bucket.}}<br />
Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [<br />
&quot;206.12.0.0/16&quot;,<br />
&quot;142.104.0.0/16&quot;<br />
]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. In this example, access to s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
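As a sketch (the bucket name and IP ranges are the example values from this page), the policy can be written to a file and syntax-checked before being applied; this matters because, as warned above, a bad policy can lock you out of your bucket:<br />

```shell
# Write the example bucket policy and validate the JSON syntax before
# applying it with s3cmd.  Bucket name and IP ranges are example values.
POLICY="$(mktemp)"
cat > "$POLICY" <<'EOF'
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::testbucket",
        "arn:aws:s3:::testbucket/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["206.12.0.0/16", "142.104.0.0/16"]
        }
      }
    }
  ]
}
EOF
# json.tool exits non-zero on malformed JSON, catching typos early.
python3 -m json.tool "$POLICY" > /dev/null && echo "policy OK"
```

The validated file would then be applied with <code>s3cmd setpolicy</code> as shown below.<br />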
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126857Arbutus object storage2023-01-21T20:16:12Z<p>Rmc: fixed warning markup</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem and, as a result, avoids some of its performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create or upload an object as a whole, but you cannot modify bytes within it. Objects are named as <code>bucket:tag</code> with no further nesting. Since object operations are basically whole-file, the provider can use a simpler internal representation, and the flat namespace lets the provider avoid metadata bottlenecks; it is essentially a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can apply for either a RAS (Rapid Access Service) or a RAC (Resource Allocation Competition) allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets is managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift and S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is available by default and is simpler, since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all the functionality of S3. In particular, if you want to manage your buckets using bucket policies, you must use S3, as Swift does not support them. You can also create and manage your own keys using S3, which could be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool, which is available in Linux, is the preferred way to access our S3 gateway; however, there are [[Arbutus Object Storage Clients|other tools]] that will also work.<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside the ''tenant''; as such, creating and managing buckets is up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via JSON files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
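As an illustrative sketch of the configuration above, the file can also be written directly from the shell. The key values below are placeholders, not real credentials, and the file is written to a temporary path here so nothing in your home directory is overwritten:<br />

```shell
# Write a minimal s3cmd configuration to a temporary file.
# REPLACE_WITH_* values are placeholders for your real EC2 credentials.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
[default]
access_key = REPLACE_WITH_ACCESS_KEY
check_ssl_certificate = True
check_ssl_hostname = True
host_base = object-arbutus.cloud.computecanada.ca
host_bucket = object-arbutus.cloud.computecanada.ca
secret_key = REPLACE_WITH_SECRET_KEY
use_https = True
EOF
# Sanity check: both host settings point at the Arbutus endpoint.
grep -c 'object-arbutus' "$CFG"
```

s3cmd would then be invoked as <code>s3cmd -c "$CFG" ls</code>, or the file saved as <code>~/.s3cfg</code>, s3cmd's default configuration location.<br />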
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket</code></p></li><br />
</ul><br />
<br />
= Bucket policies = <!--T:31--><br />
{{Warning |heading=Warning|Be careful because an ill-conceived policy can lock you out of your bucket}}<br />
Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [<br />
&quot;206.12.0.0/16&quot;,<br />
&quot;142.104.0.0/16&quot;<br />
]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. In this example, access to s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
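As a sketch (the bucket name and IP ranges are the example values from this page), the policy can be written to a file and syntax-checked before being applied; this matters because, as warned above, a bad policy can lock you out of your bucket:<br />

```shell
# Write the example bucket policy and validate the JSON syntax before
# applying it with s3cmd.  Bucket name and IP ranges are example values.
POLICY="$(mktemp)"
cat > "$POLICY" <<'EOF'
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::testbucket",
        "arn:aws:s3:::testbucket/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["206.12.0.0/16", "142.104.0.0/16"]
        }
      }
    }
  ]
}
EOF
# json.tool exits non-zero on malformed JSON, catching typos early.
python3 -m json.tool "$POLICY" > /dev/null && echo "policy OK"
```

The validated file would then be applied with <code>s3cmd setpolicy</code> as shown below.<br />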
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126856Arbutus object storage2023-01-21T20:14:27Z<p>Rmc: add warning</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem and, as a result, avoids some of its performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create or upload an object as a whole, but you cannot modify bytes within it. Objects are named as <code>bucket:tag</code> with no further nesting. Since object operations are basically whole-file, the provider can use a simpler internal representation, and the flat namespace lets the provider avoid metadata bottlenecks; it is essentially a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly as a whole and mostly read-only; and have simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can apply for either a RAS (Rapid Access Service) or a RAC (Resource Allocation Competition) allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets is managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift and S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is available by default and is simpler, since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all the functionality of S3. In particular, if you want to manage your buckets using bucket policies, you must use S3, as Swift does not support them. You can also create and manage your own keys using S3, which could be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool, which is available in Linux, is the preferred way to access our S3 gateway; however, there are [[Arbutus Object Storage Clients|other tools]] that will also work.<br />
<br />
<!--T:16--><br />
Users are responsible for operations inside the ''tenant''; as such, creating and managing buckets is up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via JSON files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
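As an illustrative sketch of the configuration above, the file can also be written directly from the shell. The key values below are placeholders, not real credentials, and the file is written to a temporary path here so nothing in your home directory is overwritten:<br />

```shell
# Write a minimal s3cmd configuration to a temporary file.
# REPLACE_WITH_* values are placeholders for your real EC2 credentials.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
[default]
access_key = REPLACE_WITH_ACCESS_KEY
check_ssl_certificate = True
check_ssl_hostname = True
host_base = object-arbutus.cloud.computecanada.ca
host_bucket = object-arbutus.cloud.computecanada.ca
secret_key = REPLACE_WITH_SECRET_KEY
use_https = True
EOF
# Sanity check: both host settings point at the Arbutus endpoint.
grep -c 'object-arbutus' "$CFG"
```

s3cmd would then be invoked as <code>s3cmd -c "$CFG" ls</code>, or the file saved as <code>~/.s3cfg</code>, s3cmd's default configuration location.<br />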
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket</code></p></li><br />
</ul><br />
<br />
= Bucket policies = <!--T:31--><br />
{{Warning|1=An ill-conceived policy can lock you out of your bucket. |heading=Warning}}<br />
Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges, given in Classless Inter-Domain Routing (CIDR) notation. Here, access to s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
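Before applying a policy like this, you can sanity-check the CIDR logic locally. The sketch below uses python3's <code>ipaddress</code> module to test whether a client IP falls inside the allowed ranges from the example policy (the IP addresses tested are arbitrary examples):<br />

```shell
# Check whether a given IP address falls inside the policy's allowed CIDR ranges.
# Requires python3; the ranges match the example bucket policy.
check_ip() {
python3 - "$1" <<'EOF'
import ipaddress, sys

ip = ipaddress.ip_address(sys.argv[1])
nets = [ipaddress.ip_network(c) for c in ("206.12.0.0/16", "142.104.0.0/16")]
print("allowed" if any(ip in n for n in nets) else "denied")
EOF
}

check_ip 142.104.10.20   # inside the University of Victoria range
check_ip 8.8.8.8         # outside both ranges
```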
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but benefits by avoiding some performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and with simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets are managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is given by default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionality of S3. The main use case here is when you want to manage your buckets using bucket policies you must use S3 as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside the ''tenant''. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via json files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket</code></p></li><br />
</ul><br />
<br />
= Bucket policies = <!--T:31--><br />
Currently Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket polices]]. The following example shows how to create, apply, and view a bucket's policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP address ranges in Classless Inter-Domain Routing (CIDR) notation. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<!--T:32--><br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<!--T:33--><br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126830Arbutus object storage2023-01-20T19:29:54Z<p>Rmc: /* Bucket policies */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but benefits by avoiding some performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and with simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets are managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is given by default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionality of S3. The main use case here is when you want to manage your buckets using bucket policies you must use S3 as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside the ''tenant''. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via json files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket --acl-private</code></p></li><br />
</ul><br />
<br />
= Bucket policies =<br />
Currently Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket polices]]. The following example shows how to create, apply, and view a bucket's policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP addresses. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126829Arbutus object storage2023-01-20T19:28:00Z<p>Rmc: /* Bucket policies */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but benefits by avoiding some performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and with simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets are managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is given by default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionality of S3. The main use case here is when you want to manage your buckets using bucket policies you must use S3 as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside the ''tenant''. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via json files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket --acl-private</code></p></li><br />
</ul><br />
<br />
= Bucket policies =<br />
Currently Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html|bucket polices]]. The following example shows how to create, apply, and view a bucket's policy. The first step is create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example denies access except from the specified source IP addresses. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126828Arbutus object storage2023-01-20T19:26:56Z<p>Rmc: /* Bucket policies */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but benefits by avoiding some performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and with simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets are managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is given by default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionality of S3. The main use case here is when you want to manage your buckets using bucket policies you must use S3 as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside the ''tenant''. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via json files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
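The configuration file above is plain INI, so it can also be generated programmatically, which is convenient when provisioning several machines. A sketch using Python's standard <code>configparser</code> module; the output filename and the placeholder key values are illustrative:

```python
import configparser

# Build a minimal s3cmd configuration equivalent to the example above
cfg = configparser.ConfigParser()
cfg["default"] = {
    "access_key": "ACCESS_KEY",  # replace with your provided access key
    "secret_key": "SECRET_KEY",  # replace with your provided secret key
    "host_base": "object-arbutus.cloud.computecanada.ca",
    "host_bucket": "object-arbutus.cloud.computecanada.ca",
    "check_ssl_certificate": "True",
    "check_ssl_hostname": "True",
    "use_https": "True",
}

# Write the file; point s3cmd at it with `s3cmd -c s3cfg.example ...`
with open("s3cfg.example", "w") as f:
    cfg.write(f)
```

By default s3cmd reads <code>~/.s3cfg</code>, so you can also write directly to that path.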
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket</code></p></li><br />
</ul><br />
<br />
= Bucket policies =<br />
Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. The following example shows how to create, apply, and view a bucket's policy. The first step is to create a policy JSON file:<br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [<br />
&quot;206.12.0.0/16&quot;,<br />
&quot;142.104.0.0/16&quot;<br />
]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example lets you restrict access to a bucket to certain source IP address ranges. Here, access to s3://testbucket is limited to the public IP range (206.12.0.0/16) used by the Arbutus cloud and the public IP range (142.104.0.0/16) used by the University of Victoria.</p><br />
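Since a malformed policy file will be rejected when applied, it can help to sanity-check the JSON locally first. A minimal sketch using Python's standard library; the field checks reflect the required elements of an S3 bucket policy statement:

```python
import json

def validate_policy(path):
    """Sanity-check a bucket policy file before applying it with s3cmd."""
    with open(path) as f:
        policy = json.load(f)  # raises an error on malformed JSON
    assert policy["Version"] == "2012-10-17", "unexpected policy version"
    for statement in policy["Statement"]:
        # Every statement needs these elements to be meaningful
        missing = {"Effect", "Principal", "Action", "Resource"} - statement.keys()
        assert not missing, f"statement is missing {missing}"
    return policy
```

Calling <code>validate_policy("testbucket.policy")</code> before <code>s3cmd setpolicy</code> catches problems such as a missing comma between entries.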
<br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but benefits by avoiding some performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and with simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets are managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is given by default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionality of S3. The main use case here is when you want to manage your buckets using bucket policies you must use S3 as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside the ''tenant''. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via json files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>View the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket --acl-private</code></p></li><br />
</ul><br />
<br />
= Bucket policies =<br />
Currently Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html|bucket polices]]. <br />
The following example shows how to create, apply, and view a bucket's policy.<br />
<br />
<p>You need to first create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example allows you to limit users of that bucket from certain source IP addresses. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address ranged (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126826Arbutus object storage2023-01-20T19:25:25Z<p>Rmc: </p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but benefits by avoiding some performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and with simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets are managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is given by default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionality of S3. The main use case here is when you want to manage your buckets using bucket policies you must use S3 as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside the ''tenant''. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via json files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
<li><p>Views the configuration of a bucket:</p><br />
<p><code>s3cmd info s3://testbucket --acl-private</code></p></li><br />
</ul><br />
<br />
= Bucket policies =<br />
Currently Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html|bucket polices]]. <br />
The following example shows how to create, apply, and view a bucket's policy.<br />
<br />
<p>You need to first create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example allows you to limit users of that bucket from certain source IP addresses. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address ranged (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126825Arbutus object storage2023-01-20T19:24:43Z<p>Rmc: /* Bucket policies */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but benefits by avoiding some performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and with simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets are managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is given by default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionality of S3. The main use case here is when you want to manage your buckets using bucket policies you must use S3 as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside the ''tenant''. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via json files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
</ul><br />
<br />
= Bucket policies =<br />
Currently Arbutus Object Storage only implements a subset of Amazon's specification for [[https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html|bucket polices]]. <br />
The following example shows how to create, apply, and view a bucket's policy.<br />
<br />
<p>You need to first create a policy json file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: &quot;206.12.0.0/16&quot;<br />
&quot;aws:SourceIp&quot;: &quot;142.104.0.0/16&quot;<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This example allows you to limit users of that bucket from certain source IP addresses. In this example the s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address ranged (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<p>To view the policy you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
<br />
</translate><br />
[[Category:CC-Cloud]]</div>Rmchttps://docs.alliancecan.ca/mediawiki/index.php?title=Arbutus_object_storage&diff=126824Arbutus object storage2023-01-20T19:23:22Z<p>Rmc: /* Bucket policies */</p>
<hr />
<div><languages /><br />
<translate><br />
<br />
= Introduction = <!--T:1--><br />
<br />
<!--T:27--><br />
Object storage is a storage facility that is simpler than a normal hierarchical filesystem, but benefits by avoiding some performance bottlenecks.<br />
<br />
<!--T:28--><br />
An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since bucket operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it's basically a key-value store.<br />
<br />
<!--T:29--><br />
The best use of object storage is to store and export items which do not need hierarchical naming; are accessed mostly atomically and mostly read-only; and with simplified access-control rules.<br />
<br />
<!--T:2--><br />
All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation. <br />
<br />
<!--T:30--><br />
Unlike a cluster computing environment, system administration for a user's Object Storage buckets are managed solely by that user. This means that operations like [[Backing up your VM|backups]] must be managed by the user. For more information about differences between Object Storage and other cloud storage types, see [[Cloud storage options]].<br />
<br />
<!--T:3--><br />
We offer access to the Object Store via two different protocols: Swift or S3.<br />
<br />
<!--T:5--><br />
These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.<br />
<br />
<!--T:6--><br />
Swift is given by default and is simpler since you do not have to manage credentials yourself. Access is governed using your Arbutus account. However, Swift does not replicate all the functionality of S3. The main use case here is when you want to manage your buckets using bucket policies you must use S3 as Swift does not support bucket policies. You can also create and manage your own keys using S3, which could be useful if you for example want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here: <br />
<br />
<!--T:7--><br />
https://docs.openstack.org/swift/latest/s3_compat.html<br />
<br />
= Accessing and managing Object Store = <!--T:8--><br />
<br />
<!--T:10--><br />
You can manage your object storage using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers (not to be confused with containers based on namespace functionality of the Linux kernel). You can create containers (AKA buckets) in this interface, upload files, and create directories. Containers can also be created using S3-compatible CLI clients. <br />
Please note that if you create a new container as ''Public'', any object placed within this container can be freely accessed (read-only) by anyone on the internet simply by navigating to <code><nowiki>https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE></nowiki></code> with your container and object names inserted in place.<br />
<br />
<!--T:12--><br />
You can also use the OpenStack command line client.<br />
For instructions on how to install and operate the OpenStack command line clients, see [[OpenStack Command Line Clients]].<br />
<br />
<!--T:13--><br />
To generate your own S3 access ID and secret key for the S3 protocol, use the OpenStack command line client:<br />
<br />
<!--T:14--><br />
<code>openstack ec2 credentials create</code><br />
<br />
<!--T:15--><br />
The <tt>s3cmd</tt> tool which is available in Linux is the preferred way to access our S3 gateway; however there are [[Arbutus Object Storage Clients|other tools]] out there that will also work.<br />
<br />
<!--T:16--><br />
The users are responsible for operations inside the ''tenant''. As such, the buckets and management of those buckets are up to the user. <br />
<br />
=== General information === <!--T:17--><br />
<br />
<!--T:18--><br />
* Buckets are owned by the user who creates them, and no other user can manipulate them.<br />
* You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.<br />
* Bucket names must be unique across '''all''' users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named ''test'', but ''def-myname-test'' is probably OK.<br />
* Bucket policies are managed via JSON files.<br />
<br />
= Connection details and s3cmd Configuration = <!--T:19--><br />
<br />
<!--T:20--><br />
Object storage is accessible via an HTTPS endpoint:<br />
<br />
<!--T:21--><br />
<code>object-arbutus.cloud.computecanada.ca:443</code><br />
<br />
<!--T:22--><br />
The following is an example of a minimal s3cmd configuration file. You will need these values, but are free to explore additional s3cmd configuration options to fit your use case. Note that in the example the keys are redacted and you will need to replace them with your provided key values:<br />
<br />
<!--T:23--><br />
<pre>[default]<br />
access_key = <redacted><br />
check_ssl_certificate = True<br />
check_ssl_hostname = True<br />
host_base = object-arbutus.cloud.computecanada.ca<br />
host_bucket = object-arbutus.cloud.computecanada.ca<br />
secret_key = <redacted><br />
use_https = True<br />
</pre><br />
<br />
<!--T:24--><br />
Using s3cmd's <code>--configure</code> feature is [[Arbutus_Object_Storage_Clients#Configuring_s3cmd | described here]].<br />
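Once the configuration file is in place (by default <code>~/.s3cfg</code>), you can verify connectivity by listing your buckets:<br />

```shell
# List all buckets owned by your credentials; an empty result is normal for a new project
s3cmd ls
```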
<br />
= Example operations on a bucket = <!--T:25--><br />
<br />
<!--T:26--><br />
<ul><br />
<li><p>Make a bucket public so that it is web accessible:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-public</code></p></li><br />
<li><p>Make the bucket private again:</p><br />
<p><code>s3cmd setacl s3://testbucket --acl-private</code></p></li><br />
</ul><br />
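Putting it together, a typical session might look like the following (the bucket and object names are hypothetical):<br />

```shell
# Create a bucket; names must be unique across all users of the Object Store
s3cmd mb s3://def-myname-test

# Upload a file, then list the bucket's contents
s3cmd put index.html s3://def-myname-test/
s3cmd ls s3://def-myname-test

# Make the bucket public, then fetch an object from it anonymously over HTTPS
s3cmd setacl s3://def-myname-test --acl-public
curl -O https://object-arbutus.cloud.computecanada.ca/def-myname-test/index.html
```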
<br />
= Bucket policies =<br />
Currently, Arbutus Object Storage implements only a subset of Amazon's specification for [https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html bucket policies]. <br />
<br />
<p>Example bucket policy:</p><br />
<p>First, create a policy JSON file:</p><br />
<pre>{<br />
&quot;Version&quot;: &quot;2012-10-17&quot;,<br />
&quot;Id&quot;: &quot;S3PolicyId1&quot;,<br />
&quot;Statement&quot;: [<br />
{<br />
&quot;Sid&quot;: &quot;IPAllow&quot;,<br />
&quot;Effect&quot;: &quot;Deny&quot;,<br />
&quot;Principal&quot;: &quot;*&quot;,<br />
&quot;Action&quot;: &quot;s3:*&quot;,<br />
&quot;Resource&quot;: [<br />
&quot;arn:aws:s3:::testbucket&quot;,<br />
&quot;arn:aws:s3:::testbucket/*&quot;<br />
],<br />
&quot;Condition&quot;: {<br />
&quot;NotIpAddress&quot;: {<br />
&quot;aws:SourceIp&quot;: [&quot;206.12.0.0/16&quot;, &quot;142.104.0.0/16&quot;]<br />
}<br />
}<br />
}<br />
]<br />
}<br />
</pre><br />
<p>This policy lets you restrict access to a bucket to certain source IP addresses. In this example, s3://testbucket is limited to the public IP address range (206.12.0.0/16) used by the Arbutus Cloud and the public IP address range (142.104.0.0/16) used by the University of Victoria.</p><br />
<br />
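Because a syntactically invalid file will be rejected when applied, it can help to validate the JSON first. A minimal offline sketch using Python's stdlib <code>json.tool</code> (the file name is arbitrary; note that multiple IP ranges belong in a JSON list under a single <code>aws:SourceIp</code> key):<br />

```shell
# Write a minimal bucket policy to a file (trimmed version of the example above)
cat > testbucket.policy <<'EOF'
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::testbucket", "arn:aws:s3:::testbucket/*"],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["206.12.0.0/16", "142.104.0.0/16"]
        }
      }
    }
  ]
}
EOF

# Check that the file parses as valid JSON before applying it
python3 -m json.tool testbucket.policy > /dev/null && echo "valid JSON"
```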
<p>Once you have your policy file, you can implement that policy on the bucket:</p><br />
<p><code>s3cmd setpolicy testbucket.policy s3://testbucket</code></p><br />
<br />
<p>To view the policy, you can use the following command:</p><br />
<p><code>s3cmd info s3://testbucket</code></p><br />
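If you later want to remove the policy from a bucket, s3cmd provides a corresponding delete subcommand:<br />

```shell
# Remove the bucket policy
s3cmd delpolicy s3://testbucket
```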
<br />
</translate><br />
[[Category:CC-Cloud]]</div>