ZFS - `zfs`
ZFS combines both physical volume management and a file system. A ZFS installation can span across a series of storage devices and is very scalable, allowing you to add disks to expand the available space in the storage pool immediately.
ZFS is a block-based file system that protects against data corruption by using checksums to verify, confirm and correct every operation. To run at a sufficient speed, this mechanism requires a powerful environment with a lot of RAM.
In addition, ZFS offers snapshots and replication, RAID management, copy-on-write clones, compression and other features.
To use ZFS, make sure you have `zfsutils-linux` installed on your machine.
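On Debian- or Ubuntu-based systems (other distributions package the tools differently), the package can be installed with `apt`; a quick sketch:

```shell
# Install the ZFS userspace tools (Debian/Ubuntu package name).
sudo apt install zfsutils-linux

# Verify that the tools are available.
zfs version
```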
Terminology

ZFS creates logical units based on physical storage devices. These logical units are called ZFS pools or zpools. Each zpool is then divided into a number of datasets. These datasets can be of different types:

- A ZFS filesystem can be seen as a partition or a mounted file system.
- A ZFS volume represents a block device.
- A ZFS snapshot captures a specific state of either a ZFS filesystem or a ZFS volume. ZFS snapshots are read-only.
- A ZFS clone is a writable copy of a ZFS snapshot.
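The relationship between these entities can be illustrated with the ZFS command-line tools; a minimal sketch that assumes root access and builds a throwaway file-backed pool (all names are examples):

```shell
# Create a small file-backed pool for experimentation.
truncate -s 1G /tmp/zfs-demo.img
sudo zpool create demo /tmp/zfs-demo.img

sudo zfs create demo/fs1                      # a ZFS filesystem (mountable dataset)
sudo zfs create -V 100M demo/vol1             # a ZFS volume (block device)
sudo zfs snapshot demo/fs1@snap0              # a read-only snapshot of the filesystem
sudo zfs clone demo/fs1@snap0 demo/fs1-clone  # a writable clone of the snapshot

sudo zfs list -t all -r demo                  # show all datasets in the pool

# Clean up the throwaway pool.
sudo zpool destroy demo
rm /tmp/zfs-demo.img
```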
`zfs` driver in Incus

The `zfs` driver in Incus uses ZFS datasets to store images, instances and custom volumes, and relies on ZFS snapshots and clones for snapshotting and copying them.

Incus assumes that it has full control over the ZFS pool and dataset. Therefore, you should never maintain any datasets or file system entities that are not owned by Incus in a ZFS pool or dataset, because Incus might delete them.

Due to the way copy-on-write works in ZFS, parent ZFS entities can't be removed until all related children are gone. As a result, Incus automatically renames any objects that are removed but still referenced. Such objects are kept at a random `deleted/` path until all references are gone and the object can safely be removed.

Note that this method might have ramifications for restoring snapshots. See Limitations below.
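For example, a pool can be handed to Incus either as a loop file or as a dedicated block device; a sketch (the device path and pool names are examples):

```shell
# Loop-backed pool, fully created and managed by Incus:
incus storage create my-zfs-pool zfs

# Pool on a dedicated, empty block device:
incus storage create my-disk-pool zfs source=/dev/sdb
```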
Incus automatically enables trimming support on all newly created pools on ZFS 0.8 or later. This increases the lifetime of SSDs by allowing better block re-use by the controller, and it also allows freeing space on the root file system when using a loop-backed ZFS pool. If you are running a ZFS version earlier than 0.8 and want to enable trimming, upgrade to at least version 0.8. Then use the following commands to make sure that trimming is automatically enabled for the ZFS pool in the future and to trim all currently unused space:
zpool upgrade ZPOOL-NAME
zpool set autotrim=on ZPOOL-NAME
zpool trim ZPOOL-NAME
Limitations

The `zfs` driver has the following limitations:

- Restoring from older snapshots

  ZFS doesn't support restoring from snapshots other than the latest one. You can, however, create new instances from older snapshots, which makes it possible to confirm whether a specific snapshot contains what you need. After determining the correct snapshot, you can remove the newer snapshots so that the snapshot you need is the latest one, and then restore it.

  Alternatively, you can configure Incus to automatically discard the newer snapshots during restore. To do so, set the `zfs.remove_snapshots` configuration for the volume (or the corresponding `volume.zfs.remove_snapshots` configuration on the storage pool for all volumes in the pool).

  Note, however, that if `zfs.clone_copy` is set to `true`, instance copies use ZFS snapshots too. In that case, you cannot restore an instance to a snapshot taken before the last copy without also deleting all its descendants. If this is not an option, you can copy the wanted snapshot into a new instance and then delete the old instance. You will, however, lose any other snapshots the instance might have had.

- Observing I/O quotas

  I/O quotas are unlikely to affect ZFS file systems very much. That's because ZFS is a port of a Solaris module (using SPL) and not a native Linux file system using the Linux VFS API, which is where I/O limits are applied.

- Feature support in ZFS

  Some features, like the use of idmaps or delegation of a ZFS dataset, require ZFS 2.2 or higher and are therefore not widely available yet.
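The snapshot-restore workaround described above can be sketched as follows (instance, pool and snapshot names are examples):

```shell
# Keep a copy of the older snapshot as a new instance to inspect it first:
incus copy c1/snap0 c1-check

# Remove the newer snapshots so that snap0 becomes the latest, then restore:
incus snapshot delete c1 snap2
incus snapshot delete c1 snap1
incus snapshot restore c1 snap0

# Alternatively, let Incus discard newer snapshots automatically on restore:
incus storage volume set my-pool container/c1 zfs.remove_snapshots=true
```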
Quotas

ZFS provides two different quota properties: `quota` and `refquota`. `quota` restricts the total size of a dataset, including its snapshots and clones. `refquota` restricts only the size of the data in the dataset itself, not counting its descendants.

By default, Incus uses the `quota` property when you set up a quota for your storage volume. If you want to use the `refquota` property instead, set the `zfs.use_refquota` configuration for the volume (or the corresponding `volume.zfs.use_refquota` configuration on the storage pool for all volumes in the pool).

You can also set the `zfs.reserve_space` (or `volume.zfs.reserve_space`) configuration to use ZFS `reservation` or `refreservation` along with `quota` or `refquota`.
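As a sketch, switching a custom volume from `quota` to `refquota` accounting looks like this (pool and volume names are examples):

```shell
# Create a volume with a 10 GiB quota (uses the ZFS "quota" property by default):
incus storage volume create my-pool my-volume size=10GiB

# Count only the data in the dataset itself, not its snapshots and clones:
incus storage volume set my-pool my-volume zfs.use_refquota=true
```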
Configuration options

The following configuration options are available for storage pools that use the `zfs` driver and for storage volumes in these pools.
Storage pool configuration

Key | Type | Default | Description
---|---|---|---
`size` | string | auto (20% of free disk space, >= 5 GiB and <= 30 GiB) | Size of the storage pool when creating loop-based pools (in bytes, suffixes supported, can be increased to grow storage pool)
`source` | string | - | Path to existing block device(s), loop file or ZFS dataset/pool (multiple block devices should be separated by `,`)
`source.wipe` | bool | `false` | Wipe the block device specified in `source` prior to creating the storage pool
`zfs.clone_copy` | string | `true` | Whether to use ZFS lightweight clones rather than full dataset copies (Boolean), or `rebase` to copy based on the initial image
`zfs.export` | bool | `true` | Disable zpool export while unmount is performed
`zfs.pool_name` | string | name of the pool | Name of the zpool
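As an example, pool-level options can be passed at creation time (the pool name, size and option values are illustrative):

```shell
# Loop-backed pool with an explicit size and rebase-style copies:
incus storage create tank zfs size=30GiB zfs.clone_copy=rebase
incus storage show tank
```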
Tip
In addition to these configurations, you can also set default values for the storage volume configurations. See Configure default values for storage volumes.
Storage volume configuration

Key | Type | Condition | Default | Description
---|---|---|---|---
`block.filesystem` | string | block-based volume with content type `filesystem` | same as `volume.block.filesystem` | File system of the storage volume: `btrfs`, `ext4` or `xfs` (`ext4` if not set)
`block.mount_options` | string | block-based volume with content type `filesystem` | same as `volume.block.mount_options` | Mount options for block-backed file system volumes
`initial.gid` | int | custom volume with content type `filesystem` | same as `volume.initial.gid` | GID of the volume owner in the instance
`initial.mode` | int | custom volume with content type `filesystem` | same as `volume.initial.mode` | Mode of the volume in the instance
`initial.uid` | int | custom volume with content type `filesystem` | same as `volume.initial.uid` | UID of the volume owner in the instance
`security.shared` | bool | custom block volume | same as `volume.security.shared` or `false` | Enable sharing the volume across multiple instances
`security.shifted` | bool | custom volume | same as `volume.security.shifted` or `false` | Enable ID shifting overlay (allows attach by multiple isolated instances)
`security.unmapped` | bool | custom volume | same as `volume.security.unmapped` or `false` | Disable ID mapping for the volume
`size` | string | | same as `volume.size` | Size/quota of the storage volume
`snapshots.expiry` | string | custom volume | same as `volume.snapshots.expiry` | Controls when snapshots are to be deleted (expects an expression like `1M 2H 3d 4w 5m 6y`)
`snapshots.pattern` | string | custom volume | same as `volume.snapshots.pattern` or `snap%d` | Pongo2 template string that represents the snapshot name (used for scheduled snapshots and unnamed snapshots) [1]
`snapshots.schedule` | string | custom volume | same as `volume.snapshots.schedule` | Cron expression (`<minute> <hour> <dom> <month> <dow>`), or a comma-separated list of schedule aliases (`@hourly`, `@daily`, `@midnight`, `@weekly`, `@monthly`, `@annually`, `@yearly`)
`zfs.blocksize` | string | | same as `volume.zfs.blocksize` | Size of the ZFS block in range from 512 to 16 MiB (must be power of 2) - for block volume, a maximum value of 128 KiB will be used even if a higher value is set
`zfs.block_mode` | bool | | same as `volume.zfs.block_mode` | Whether to use a formatted `zvol` rather than a dataset
`zfs.delegate` | bool | ZFS 2.2 or higher | same as `volume.zfs.delegate` | Controls whether to delegate the ZFS dataset and anything underneath it to the container(s) using it. Allows the use of the `zfs` command in the container.
`zfs.remove_snapshots` | bool | | same as `volume.zfs.remove_snapshots` or `false` | Remove snapshots as needed
`zfs.use_refquota` | bool | | same as `volume.zfs.use_refquota` or `false` | Use `refquota` instead of `quota` for space
`zfs.reserve_space` | bool | | same as `volume.zfs.reserve_space` or `false` | Use `reservation`/`refreservation` along with `quota`/`refquota`
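For example, a custom volume that uses ZFS block mode could be created like this (the pool name, volume name and sizes are illustrative):

```shell
# Formatted zvol instead of a dataset, with a custom ZFS block size:
incus storage volume create my-pool my-block-vol zfs.block_mode=true zfs.blocksize=64KiB size=5GiB
incus storage volume show my-pool my-block-vol
```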
Storage bucket configuration

To enable storage buckets for local storage pool drivers and allow applications to access the buckets via the S3 protocol, you must configure the `core.storage_buckets_address` server setting.

Key | Type | Condition | Default | Description
---|---|---|---|---
`size` | string | appropriate driver | same as `volume.size` | Size/quota of the storage bucket
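A minimal sketch of enabling and using storage buckets (the listen address, pool, bucket and key names are examples):

```shell
# Expose the S3 API on all addresses, port 8555:
incus config set core.storage_buckets_address :8555

# Create a bucket and an access key for it:
incus storage bucket create my-pool my-bucket
incus storage bucket key create my-pool my-bucket my-key
```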