ZFS


This article is a draft

This is not a complete article: This is a draft, a work in progress that is intended to be published into an article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.




ZFS is a combined file system and logical volume manager designed by Sun Microsystems. ZFS can scale to very large file system sizes and supports compression.

ZFS greatly simplifies the process of increasing the size of a filesystem as required. The simplest approach is to add new volumes to your VM and then add them to your ZFS filesystem to grow its size. This can be done while the filesystem is live and file IO is occurring on it.

Installing ZFS

Starting with the image Ubuntu-18.04-Bionic-x64-2018-09

Ensure your package list is up to date and also upgrade your installed packages. While it isn't strictly necessary to upgrade your installed packages, it is a good idea.

[name@server]$ sudo apt-get update 
[name@server]$ sudo apt-get dist-upgrade -y

Next, install ZFS

[name@server]$ sudo apt-get install zfsutils-linux
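
To verify the installation, you can run one of the ZFS commands as a quick, optional check. With no pools created yet, zpool status should simply report that none exist; if it instead complains that the ZFS modules are not loaded, they can be loaded with sudo modprobe zfs.

[name@server]$ sudo zpool status
no pools available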

Starting with the image CentOS-7-x64-2018-09

[name@server]$ sudo yum install http://download.zfsonlinux.org/epel/zfs-release.el7_5.noarch.rpm
...
Total size: 2.9 k
Installed size: 2.9 k
Is this ok [y/d/N]: y
...

hmm... this is looking strangely more complicated, see for example [1]
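
Continuing from there, a rough sketch of the DKMS-based route would be something like the commands below; a kmod-based package set is also available, and the exact packages needed may differ depending on the kernel in the image, so consult the ZFS on Linux documentation before relying on this.

[name@server]$ sudo yum install -y epel-release
[name@server]$ sudo yum install -y kernel-devel zfs
[name@server]$ sudo modprobe zfs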

Starting with the image Fedora-Cloud-Base-29-1.2

to be written!

Using ZFS

Creating a zpool

[name@server]$ sudo zpool create -f data /dev/vdb /dev/vdc

This will create a new mount point at /data backed by the volumes attached at /dev/vdb and /dev/vdc. The filesystem will have a size slightly smaller than the combined sizes of all attached volumes.
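
To confirm the pool was created and to see its size and mount point, something like the following can be used; the sizes reported will depend on the volumes attached.

[name@server]$ sudo zpool list data
[name@server]$ df -h /data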

ZFS can compress data as it is written to the file system and decompress it when it is read. To turn on compression and choose a compression algorithm for a zpool, use the following command.

[name@server]$ sudo zfs set compression=lz4 data

This will use the lz4 compression algorithm on the zpool data. If your data is largely binary you might not see a large reduction in storage use; however, if your data is more compressible, such as ASCII data, you may see a larger reduction. Using compression can also speed up your file IO because less data needs to be read and written. However, this depends on the particular compression algorithm chosen, and some particularly computationally intensive algorithms may actually reduce file IO rates. The lz4 algorithm was chosen because it is a reasonable compromise between speed and amount of compression achieved; other compression algorithms may provide better compression or speed, but likely not both.
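
Note that compression only applies to data written after the property is set. Once some data has been written, the effectiveness of compression can be checked by querying the compressratio property, for example:

[name@server]$ sudo zfs get compression,compressratio data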

Check settings

[name@server]$ sudo zfs get all data

Create a dataset within the data zpool

[name@server]$ sudo zfs create -p data/www
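
By default the new dataset is mounted under the pool at /data/www. Datasets can carry their own properties; for example, to mount it at a different location (the path /var/www below is only an illustration) and then list the datasets:

[name@server]$ sudo zfs set mountpoint=/var/www data/www
[name@server]$ sudo zfs list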

Growing a zpool

Add a new volume

[name@server]$ sudo zpool add data /dev/vde

Check pool status

[name@server]$ sudo zpool status data

Destroying a zpool

[name@server]$ sudo zpool destroy data

Notes

  • While in theory it should be possible to use ZFS with resizing volumes in OpenStack, in practice this has not been straightforward and is better avoided if possible.
  • While there is no hard limit to how many volumes you can attach to your VM, it is best to keep the number of attached volumes reasonable. A configuration of 19 attached volumes has been tested and shown to work well with the Queens version of OpenStack.
  • Having a pool of two or more volumes may provide improved IO performance; per-volume activity can be checked as shown below.
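
To see how reads and writes are spread across the volumes in a pool, the per-device statistics can be checked with:

[name@server]$ sudo zpool iostat -v data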

See also