{{Draft}}
<languages />
<translate>


= Arbutus Object Storage Clients =
<!--T:1-->
For information on obtaining Arbutus Object Storage, please see [[Arbutus object storage|this page]]. For information on how to use an object storage client to manage your Arbutus object store, choose a client and follow instructions from these pages:
* [[Accessing object storage with s3cmd]]
* [[Accessing object storage with WinSCP]]
* [[Accessing the Arbutus object storage with AWS CLI]]


<!--T:2-->
It is important to note that Arbutus' Object Storage solution does not use Amazon's [https://documentation.help/s3-dg-20060301/VirtualHosting.html S3 Virtual Hosting] (i.e. DNS-based bucket) approach which these clients assume by default. They need to be configured not to use that approach, as described in the pages linked above.


This page describes how to configure and use two common object storage clients:
<!--T:42-->
# s3cmd
# WinSCP
</translate>

[[Category:Cloud]]
 
== s3cmd ==
 
Depending on your Linux distribution, the <code>s3cmd</code> command can be installed using the appropriate yum or apt command:
 
<code>$ sudo yum install s3cmd</code><br/>
<code>$ sudo apt-get install s3cmd </code>
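
If your distribution does not package <code>s3cmd</code>, it is also distributed on PyPI and can typically be installed with pip (assuming a suitable Python environment is available):

<code>$ pip install s3cmd</code>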
 
To configure the s3cmd tool, use the command
<code>$ s3cmd --configure</code>

and provide values along the following lines:
<pre>
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
 
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key []: <b>20_DIGIT_ACCESS_KEY</b>
Secret Key []: <b>40_DIGIT_SECRET_KEY</b>
Default Region [US]:
 
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint []: object-arbutus.cloud.computecanada.ca
 
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket []: object-arbutus.cloud.computecanada.ca
 
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password []: PASSWORD
Path to GPG program []: /usr/bin/gpg
 
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol []: Yes
 
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
</pre>
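
After the configuration wizard completes, s3cmd stores these answers in <code>~/.s3cfg</code>. Consistent with the earlier note about S3 Virtual Hosting, both the endpoint and the bucket+hostname template are set to the plain Arbutus endpoint, without the <code>%(bucket)s</code> prefix. The relevant entries of the resulting file should look roughly like this (an illustrative excerpt, not a complete file):

<pre>
access_key = 20_DIGIT_ACCESS_KEY
secret_key = 40_DIGIT_SECRET_KEY
host_base = object-arbutus.cloud.computecanada.ca
host_bucket = object-arbutus.cloud.computecanada.ca
use_https = True
</pre>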
 
The next task is to make a bucket; buckets contain files. Bucket names must be globally unique across the Arbutus object storage solution, so you will need to choose a bucket name that does not conflict with those of other users.  For example, the buckets "s3://test/" and "s3://data/" are likely already taken.  Consider bucket names that reflect your project, for example "s3://def-test-bucket1" or "s3://atlas_project_bucket".  Bucket names can only use the characters A-Z, a-z, 0-9, ., - and _.
 
To create a bucket, use the tool's <code>mb</code> (make bucket) command:
 
<code>$ s3cmd mb s3://BUCKET_NAME/</code>
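
For example, with a hypothetical project-specific name (replace it with your own unique name); the tool's <code>ls</code> command can then be used to confirm which buckets you own:

<code>$ s3cmd mb s3://def-myproject-bucket1/</code><br/>
<code>$ s3cmd ls</code>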
 
To see the status of a bucket, use the command:
 
<code>$ s3cmd info s3://BUCKET_NAME/</code>
 
To upload a file to the bucket, use the command:
 
<code>$ s3cmd put --guess-mime-type FILE_NAME.dat s3://BUCKET_NAME/FILE_NAME.dat</code>
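
To verify the upload, you can list the contents of the bucket with the tool's <code>ls</code> command, or download the object again with its <code>get</code> command (the local filename below is only an example):

<code>$ s3cmd ls s3://BUCKET_NAME/</code><br/>
<code>$ s3cmd get s3://BUCKET_NAME/FILE_NAME.dat FILE_NAME_copy.dat</code>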
 
Buckets can have Access Control Lists (ACLs) and policies which govern who can access what resources in the object store.  These features can be quite sophisticated; here are two simple examples using the tool's <code>setacl</code> command.
 
<code>$ s3cmd setacl --acl-public s3://BUCKET_NAME</code>
 
The result of this command is that anyone can access the bucket and the files in the bucket.  Files can be accessed via URLs such as https://object-arbutus.cloud.computecanada.ca/BUCKET_NAME/FILE_NAME.dat.
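
Since the bucket is now public, such a URL can be fetched without any credentials, for example with curl:

<code>$ curl -O https://object-arbutus.cloud.computecanada.ca/BUCKET_NAME/FILE_NAME.dat</code>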
 
The second ACL example makes the bucket private, so that only the owner can access it:
 
<code>$ s3cmd setacl --acl-private s3://BUCKET_NAME</code>
 
Other more sophisticated examples can be found in the s3cmd man page.
 
== WinSCP ==
 
WinSCP can be downloaded and installed from https://winscp.net/.
