Arbutus Migration Guide
This document describes how to migrate virtual machine (VM) instances from the legacy West Cloud to the new Arbutus Cloud. You know your workload best, so we recommend that you migrate your instances according to your own application requirements and schedule.
Preliminaries
Note the following URLs for accessing the Horizon Web UI for the two Clouds:
West Cloud (legacy): https://west.cloud.computecanada.ca
Arbutus Cloud (new): https://arbutus.cloud.computecanada.ca
Firefox and Chrome browsers are supported. Safari and Edge may work but have not been validated.
Your Project (Tenant), Network, and Router will be pre-created for you in Arbutus Cloud. User access will also be pre-populated.
Prior to migrating instances, we recommend that you complete the following preliminaries to prepare the necessary environment for migration.
- IMPORTANT: Back up any critical data! While the Cloud has redundant storage systems, no backups of any instances are taken.
- Get RC files (which set the environment variables needed by the OpenStack command-line tools) after logging in to the URLs above with your Compute Canada credentials:
- West Cloud: Under Compute -> Access & Security -> API Access tab, select the “Download OpenStack RC File” button.
- Arbutus Cloud: Under Project -> API Access -> Download OpenStack RC File (use the OpenStack RC File (Identity API v3) option).
- Copy the OpenStack RC files to the migration host cloudmigration.computecanada.ca. Use your Compute Canada credentials for access.
- Open two SSH sessions to the migration host: one for the legacy cloud and one for the new cloud. We recommend that you use the screen command in your sessions to maintain them in case of SSH disconnections. (Consult the many screen tutorials available on the Internet if you have never used screen before.) In your legacy SSH session, source the RC file from the legacy cloud (source oldcloudrc.sh), and in the other SSH session, source the RC file from the new cloud (source newcloudrc.sh). Test your configuration by running a simple openstack command, e.g. openstack server list. A minimal sketch of this setup appears at the end of these preliminaries.
- Migrate SSH keys:
- Using the Horizon dashboard on West Cloud, navigate to Access & Security -> Key Pairs. Click on the name of the key pair you want and copy the public key value.
- Using the Horizon dashboard on Arbutus Cloud, navigate to Compute -> Key Pairs.
- Click Import Public Key: give your Key Pair a name and paste in the public key from West Cloud.
- Your Key Pair should now be imported into Arbutus Cloud. Repeat the above steps for as many keys as you need.
- You can also generate new Key Pairs if you choose.
- Key Pairs can also be imported via the CLI as follows:
openstack keypair create --public-key <public-keyfile> <name>
- Migrate security groups and rules:
- On West Cloud, under Compute -> Access & Security -> Security Groups, note the existing security groups and their associated rules.
- On Arbutus Cloud, under Network -> Security Groups, re-create the security groups and their associated rules as needed.
- Do not delete any of the Egress security rules for IPv4 and IPv6 created by default. Deleting these rules can cause your instances to fail to retrieve configuration data from the OpenStack metadata service and a host of other issues.
- Security groups and rules can also be created via the CLI as follows. An example is shown for HTTP port 80 only; modify it according to your requirements:
openstack security group create <group-name>
openstack security group rule create --proto tcp --remote-ip 0.0.0.0/0 --dst-port 80 <group-name>
- To view rules via the CLI, use
openstack security group list
to list the available security groups, and
openstack security group rule list <group-name>
to view the rules in a specific group.
- Plan an outage window. Generally, shutting down services and then shutting down the instance is the best way to avoid corrupt or inconsistent data after the migration. Smaller volumes can be copied over fairly quickly, i.e. within minutes, but larger volumes will take longer. Plan for this. Additionally, floating IP addresses will change, so ensure the TTL of your DNS records is set to a small value so that the changes propagate as quickly as possible.
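For reference, here is a minimal sketch of the two-session setup described above, using the RC file names from the example; the screen session names are illustrative only:

# Session 1: legacy West Cloud
screen -S westcloud
source oldcloudrc.sh
openstack server list

# Session 2: new Arbutus Cloud
screen -S arbutus
source newcloudrc.sh
openstack server list

If both openstack server list commands return without an authentication error, the RC files are working.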
There are three general migration scenarios to consider.
Depending on your current setup, you may use any or all of these scenarios to migrate from the West Cloud to the Arbutus Cloud.
Manual or orchestrated migration
In this scenario, instances and volumes are created in Arbutus with the same specifications as those on West Cloud. The general approach is:
- Copy any Glance images from West Cloud to Arbutus Cloud if you are using any customized images. You may also simply start with a fresh base image in Arbutus Cloud.
- Install and configure services on the instance (or instances).
- Copy data from the old instances to the new instances; see methods to copy data below.
- Assign floating IP addresses to the new instances and update DNS.
- Decommission the old instances and delete old volumes.
The above steps can be done manually or orchestrated via various configuration management tools. The use of such tools is beyond the scope of this document, but if you were already using orchestration tools on West Cloud, they should work with Arbutus Cloud as well.
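As an illustration of the instance- and volume-creation portion of these steps, the commands on Arbutus might look roughly like the following. This is a sketch only; every name, size, and flavor shown is a placeholder to be replaced with your own values:

openstack volume create --size <size-in-GB> <data-volume-name>
openstack server create --image <base-image> --flavor <flavor> --key-name <keypair> --security-group <group> --network <network> <instance-name>
openstack server add volume <instance-name> <data-volume-name>
openstack floating ip create <external-network>
openstack server add floating ip <instance-name> <floating-ip>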
Migrating volume-backed instances
Volume-backed instances, as their name implies, have a persistent volume attached to them containing the operating system and any required data. Best practice is to use separate volumes for the operating system and for data.
Migration using Glance images
This method is recommended for volumes less than 150GB in size. For volumes larger than that, creating new volumes in Arbutus Cloud and copying the required data across from West Cloud is preferred to creating Glance images and transferring the images between clouds.
- Open two SSH sessions to the migration host cloudmigration.computecanada.ca with your Compute Canada credentials.
- In one session, source the OpenStack RC file for West Cloud. In the other session, source the OpenStack RC file for Arbutus Cloud. As mentioned earlier, use of the screen command is recommended in case of SSH disconnections.
- In the West Cloud web UI, create an image of the desired volume (Compute -> Volumes and Upload to Image from the drop down menu). We recommend that the volume is not in use, but the force option can be used if it is. The command line can also be used to do this:
cinder --os-volume-api-version 2 upload-to-image <volumename> <imagename> --force
- Once the image is created, it will show up under Compute -> Images with the name you specified in the previous step. You can obtain the id of the image by clicking on the name.
- In the West Cloud session on the migration host, download the image (replace the <filename> and <image-id> with real values):
glance image-download --progress --file <filename> <image-id>
- In the Arbutus Cloud session on the migration host, upload the image (replace <filename> with the name from the previous step; <image-name> can be anything):
glance image-create --progress --visibility private --container-format bare --disk-format qcow2 --name <image-name> --file <filename>
- You can now create a volume from the uploaded image. In the Arbutus Cloud web UI, navigate to Compute -> Images. The uploaded image from the previous step should be there. In the drop down menu for the image, select the option Create Volume and the volume will be created from the image. The created volume can then be attached to instances or used to boot a new instance. (A CLI equivalent is sketched after this list.)
- Once you have migrated and validated your instances and volumes, and once all associated DNS records are updated, please delete your old instances and volumes on the legacy West Cloud.
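If you prefer the CLI, the volume can also be created from the uploaded image in the Arbutus Cloud session on the migration host. This is a sketch only; the names are placeholders, and the size must be at least as large as the original volume:

openstack volume create --image <image-name> --size <size-in-GB> <new-volume-name>
openstack volume list    # wait until the new volume shows the status "available"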
Alternative method: Migrating a volume-backed instance using Linux 'dd'
- Launch an instance on West Cloud with the smallest flavor possible “p1-1.5gb”. We will call this the "temporary migration host". The instructions below assume you choose CentOS 7 for this instance, but any Linux distribution with Python and Pip available should work.
- Log in to the instance via SSH and install the OpenStack CLI in a root shell:
yum install epel-release
yum install python-devel python-pip gcc
pip install python-openstackclient
- The OpenStack CLI should now be installed. To verify, try executing openstack on the command line. For further instructions, including installing the OpenStack CLI on systems other than CentOS, see: https://docs.openstack.org/newton/user-guide/common/cli-install-openstack-command-line-clients.html
- Copy your OpenStack RC file from Arbutus to the temporary migration host and source it. Verify that you can connect to the OpenStack API on Arbutus by executing the following command:
openstack image list
- Delete the instance to be moved, but do NOT delete the volume it is attached to.
- The volume is now free to be attached to the temporary migration host we created. Attach the volume to the temporary migration host by going to Compute -> Volumes in the West Cloud web UI. Select “Manage Attachments” from the drop down menu and attach the volume to the temporary migration host.
- Note the device that the volume is attached as (typically /dev/vdb or /dev/vdc).
- Use the dd utility to create an image from the attached disk of the instance. You can call the image whatever you prefer; in the following example we've used “volumemigrate”. When the command completes, you will receive output showing the details of the image created:
dd if=/dev/vdb | openstack image create --private --container-format bare --disk-format raw "volumemigrate"
- You should now be able to see the image under Compute -> Images in the Arbutus Cloud web UI. This image can now be used to launch instances on Arbutus. Make sure to create a new volume when launching the instance if you want the data to be persistent (a CLI sketch follows this list).
- Once you have migrated and validated your volumes and instances, and once any associated DNS records are updated, please delete your old instances and volumes on the legacy West Cloud.
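For example, one way to launch a new instance from the migrated image with persistent storage is to first create a volume from it and then boot from that volume. This is a sketch only; the flavor, key pair, security group, network, size, and names are placeholders:

openstack volume create --image volumemigrate --size <size-in-GB> <new-root-volume>
openstack server create --volume <new-root-volume> --flavor <flavor> --key-name <keypair> --security-group <group> --network <network> <new-instance-name>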
Migrating ephemeral instances
An ephemeral instance is an instance without a backing volume.
Migration using Glance images and volume snapshots
This method is recommended for instances with ephemeral storage less than 150GB in size. For instances with storage larger than that, creating new instances in Arbutus Cloud and copying the required data across from West Cloud is preferred to creating Glance images and transferring the images between clouds. In either case you will still need to copy data from any non-boot ephemeral storage (i.e. mounted under /mnt) separately. Consult methods to copy data below for this.
- Open two SSH sessions to the migration host cloudmigration.computecanada.ca with your Compute Canada credentials.
- In one session, source the OpenStack RC file for West Cloud. In the other session, source the OpenStack RC file for Arbutus Cloud. As mentioned earlier, use of the screen command is recommended in case of SSH disconnections.
- In the West Cloud web UI, create a snapshot of the desired instance (Compute -> Instances and Create Snapshot from the drop down menu). The CLI can also be used:
nova list
nova image-create --poll <instancename> <snapshotname>
- The snapshot created in the previous step will show up under Compute -> Images. You can obtain the id of the snapshot by clicking on the name.
- In the West Cloud session on the migration host, download the snapshot (replace the <filename> and <imageid> with real values):
glance image-download --progress --file <filename> <imageid>
- In the Arbutus Cloud session on the migration host, upload the snapshot (replace the <filename> with the name from the previous step; the <imagename> can be anything):
glance image-create --progress --visibility private --container-format bare --disk-format qcow2 --name <imagename> --file <filename>
- New instances can now be launched on Arbutus Cloud from this image (a CLI sketch follows this list).
- Once you have migrated and validated your volumes and instances, and after any associated DNS records are updated, please delete your old instances on the legacy West Cloud.
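For example, launching a new instance from the uploaded image via the CLI in the Arbutus Cloud session might look like the following sketch; the flavor, key pair, security group, network, and instance name are placeholders:

openstack image list    # confirm the uploaded image shows a status of "active"
openstack server create --image <imagename> --flavor <flavor> --key-name <keypair> --security-group <group> --network <network> <new-instance-name>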
Alternative method: Migrating an ephemeral instance using Linux 'dd'
- Log in to the instance running on West Cloud via SSH. When migrating an ephemeral instance, it is important to shut down as many services as possible on the instance prior to migration, e.g. httpd, databases, etc. Ideally, leave only SSH running.
- As root, install the OpenStack CLI if not already installed:
yum install epel-release
yum install python-devel python-pip gcc
pip install python-openstackclient
- The OpenStack CLI should now be installed. To verify, try executing openstack on the command line. For further instructions, including installing the OpenStack CLI on systems other than CentOS, see: https://docs.openstack.org/newton/user-guide/common/cli-install-openstack-command-line-clients.html
- Copy your OpenStack RC file from Arbutus to the instance and source it. Verify that you can connect to the OpenStack API on Arbutus by executing the following command:
openstack image list
- The root disk on the instance is typically /dev/vda1; verify this using the df command.
- Use the dd utility to create an image from the root disk of the instance. You can call the image whatever you prefer; in the following example we've used "ephemeralmigrate". When the command completes, you will receive output showing the details of the image created:
dd if=/dev/vda | openstack image create --private --container-format bare --disk-format raw "ephemeralmigrate"
- You should now be able to see the image under Compute -> Images in the Arbutus Cloud web UI. This image can now be used to launch instances on Arbutus. (An optional CLI check is sketched after this list.)
- Once you have migrated and validated your volumes and instances, and after any associated DNS records are updated, please delete your old instances on the legacy West Cloud.
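As an optional sanity check before launching anything, you can inspect the uploaded image with the Arbutus RC file sourced; this is just a sketch using the example image name from above:

openstack image list
openstack image show ephemeralmigrate    # check that the status is "active" and the size looks plausible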
Methods to copy data
Here are two recommended approaches for copying data between instances running in the two clouds. The most appropriate method depends upon the size of the data volumes in your tenant. For very large volumes (e.g. greater than 5TB) Globus is recommended. Please see here for configuration details: https://computecanada.github.io/DHSI-cloud-course/globus/
If you have very large volumes, we recommend you submit a support ticket as well.
For smaller volumes, rsync+ssh provides good transfer speeds and can (like Globus) work in an incremental way. A typical use case would be:
- SSH to the West Cloud instance which has the large volume attached. Note the absolute path you want to copy to the instance on Arbutus Cloud.
- Execute rsync over SSH. The example below assumes that password-less login via SSH keys has already been set up between the instances. Replace the placeholders below with real values:
rsync -avzP -e 'ssh -i ~/.ssh/key.pem' /local/path/ remoteuser@remotehost:/path/to/files/
- Verify that the data has been successfully copied on the instance in Arbutus Cloud. Then delete the data from the legacy West Cloud.
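Because rsync copies incrementally, you can do an initial copy while services are still running and then re-run the same command during your outage window to transfer only what has changed. A sketch, using the same placeholders as above:

# Initial copy while services are still running
rsync -avzP -e 'ssh -i ~/.ssh/key.pem' /local/path/ remoteuser@remotehost:/path/to/files/
# Final pass during the outage window, after services are stopped;
# --delete removes destination files that no longer exist on the source, so use it only if that is intended
rsync -avzP --delete -e 'ssh -i ~/.ssh/key.pem' /local/path/ remoteuser@remotehost:/path/to/files/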
You may also use any other method you are familiar with for transferring data.
Support
Support requests can be sent to the usual Cloud support address at cloud@computecanada.ca