Arbutus object storage

Introduction

Object storage is a storage facility that is simpler than a normal hierarchical filesystem; this simplicity lets it avoid some performance bottlenecks.

An object is a fixed file in a flat namespace: you can create/upload an object as a whole, but cannot modify bytes within it. Objects are named as bucket:tag with no further nesting. Since object operations are basically whole-file, the provider can use a simpler internal representation. The flat namespace allows the provider to avoid metadata bottlenecks; it is basically a key-value store.

The best use of object storage is to store and export items which do not need hierarchical naming, which are accessed mostly atomically and mostly read-only, and which have simplified access-control rules.

All Arbutus projects are allocated a default 1TB of Object Store. If more is required, you can either apply for a RAS allocation or a RAC allocation.

We offer access to the Object Store via two different protocols: Swift and S3.

These protocols are very similar and in most situations you can use whichever you like. You don't have to commit to one, as buckets and objects created with Swift or S3 can be accessed using both protocols. There are a few key differences in the context of Arbutus Object Store.

Swift is provided by default and is simpler, since you do not have to manage credentials yourself; access is governed by your Arbutus account. However, Swift does not replicate all of the functionality of S3. In particular, if you want to manage your buckets using bucket policies, you must use S3, as Swift does not support bucket policies. S3 also lets you create and manage your own keys, which can be useful if, for example, you want to create a read-only user for a specific application. A full list of Swift/S3 compatibility can be found here:

https://docs.openstack.org/swift/latest/s3_compat.html

Accessing and managing Object Store

When requesting access we will ask you for the following:

You can interact with your Object Store using the Object Store tab for your project at https://arbutus.cloud.computecanada.ca/. This interface refers to buckets as containers; in this context the two terms are interchangeable. Please note that if you create a new container as Public, any object placed within it can be read by anyone on the internet simply by navigating to the following URL, with your container and object names filled in:

https://object-arbutus.cloud.computecanada.ca/<YOUR CONTAINER NAME HERE>/<YOUR OBJECT NAME HERE>
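
For example, if you had a public container named def-myname-test holding an object named hello.txt (both names are hypothetical), anyone could fetch that object with a plain HTTPS client such as curl:

curl -O https://object-arbutus.cloud.computecanada.ca/def-myname-test/hello.txt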

You can also use the Swift command line tool included with the OpenStack command line clients. For instructions on how to install and operate the OpenStack command line clients, see OpenStack Command Line Clients.
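
As a rough sketch of a typical Swift session (assuming the clients are installed and your OpenStack environment variables are set as described on that page; the container and file names below are hypothetical):

swift post def-myname-test                  # create a container
swift upload def-myname-test myfile.dat     # upload a file as an object
swift list def-myname-test                  # list objects in the container
swift download def-myname-test myfile.dat   # download the object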

If you wish to use the S3 protocol, you can generate your own S3 access and secret keys using the OpenStack command line client:

openstack ec2 credentials create
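
This prints an access key and a secret key pair. If you later need to review or revoke key pairs, you can do so with the same client (replace <access key> with the key in question):

openstack ec2 credentials list
openstack ec2 credentials show <access key>
openstack ec2 credentials delete <access key>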

The s3cmd tool, which is available in Linux, is the preferred way to access our S3 gateway; however, other tools will also work.

Users are responsible for operations inside their tenant (project). As such, creating and managing buckets is up to the user.

General information

  • Buckets are owned by the user who creates them, and no other user can manipulate them.
  • You can make a bucket accessible to the world, which then gives you a URL to share that will serve content from the bucket.
  • Bucket names must be unique across all users in the Object Store, so you may benefit by prefixing each bucket with your project name to maintain uniqueness. In other words, don't bother trying to create a bucket named test, but def-myname-test is probably OK (see the example after this list).
  • Bucket policies are managed via JSON files.
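
For example, once s3cmd is configured as described in the following sections, creating a bucket with a project-prefixed (and therefore more likely unique) name is a single command; the bucket name below is hypothetical:

s3cmd mb s3://def-myname-test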

Connection details and s3cmd Configuration

Object storage is accessible via an HTTPS endpoint:

object-arbutus.cloud.computecanada.ca:443

The following is an example of a minimal s3cmd configuration file. You will need these values, but you are free to explore additional s3cmd configuration options to fit your use case. Note that in this example the keys are redacted; you will need to replace them with the key values provided to you:

[default]
access_key = <redacted>
check_ssl_certificate = True
check_ssl_hostname = True
host_base = object-arbutus.cloud.computecanada.ca
host_bucket = object-arbutus.cloud.computecanada.ca
secret_key = <redacted>
use_https = True

By default, s3cmd reads this configuration from ~/.s3cfg; you can point it at a different configuration file with its -c/--config option.
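
Once the configuration file is in place, typical object operations look like the following sketch (bucket and file names are hypothetical):

s3cmd ls                                                    # list your buckets
s3cmd put myfile.dat s3://def-myname-test/                  # upload a file
s3cmd ls s3://def-myname-test                               # list objects in the bucket
s3cmd get s3://def-myname-test/myfile.dat myfile-copy.dat   # download the object
s3cmd del s3://def-myname-test/myfile.dat                   # delete the object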

Example operations on a bucket

  • Make a bucket public so that it is web accessible:

    s3cmd setacl s3://testbucket --acl-public
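
    To confirm the change, you can inspect the bucket; s3cmd info reports the bucket's ACL and any policy attached to it:

    s3cmd info s3://testbucket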

  • Make the bucket private again:

    s3cmd setacl s3://testbucket --acl-private

  • Example bucket policy:

    You first need to create a policy file in JSON format, for example testbucket.policy:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": [
          "arn:aws:iam::rrg_cjhuofw:user/parsa7",
          "arn:aws:iam::rrg_cjhuofw:user/dilbar"
        ]},
        "Action": [
          "s3:ListBucket",
          "s3:PutObject",
          "s3:DeleteObject",
          "s3:GetObject"
        ],
        "Resource": [
          "arn:aws:s3:::testbucket/*",
          "arn:aws:s3:::testbucket"
        ]
      }]
    }
    

    This file allows you to set specific permissions for any number of users of that bucket.

    You can even specify users from another tenant if there is a user from another project working with you.

    Now that you have your policy file, you can implement that policy on the bucket:

    s3cmd setpolicy testbucket.policy s3://testbucket
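
    If you later need to remove the policy, s3cmd also provides a delpolicy command (testbucket is the same example bucket as above):

    s3cmd delpolicy s3://testbucket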

    More extensive examples and actions can be found here: https://www.linode.com/docs/platform/object-storage/how-to-use-object-storage-acls-and-bucket-policies/