# TrueNAS - `truenas`

The `truenas` storage driver enables an Incus node to use a remote TrueNAS storage server to host one or more Incus storage pools. When the node is part of a cluster, all cluster members can access the storage pool simultaneously, making it ideal for use cases such as live-migrating virtual machines (VMs) between nodes.
The driver operates in a block-based manner, meaning that all Incus volumes are created as ZFS Volume block devices on the remote TrueNAS server. These ZFS Volume block devices are accessed on the local Incus node via iSCSI.
Modeled after the existing ZFS driver, the `truenas` driver supports most standard ZFS functionality, but operates on remote TrueNAS servers. For instance, a local VM can be snapshotted and cloned, with the snapshot and clone operations performed on the remote server after synchronizing the local file system. The clone is then activated through iSCSI as necessary.
Each storage pool corresponds to a ZFS dataset on a remote TrueNAS host. The dataset is created automatically if it does not exist. The driver uses ZFS features available on the remote host to support efficient image handling, copy operations, and snapshot management without requiring nested ZFS (ZFS-on-ZFS).
To reference a remote dataset, the `source` property can be specified in the form:

`[<remote host>:]<remote pool>[[/<remote dataset>]...][/]`

If the path ends with a trailing `/`, the dataset name is derived from the Incus storage pool name (e.g., `tank/pool1`).
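The trailing-slash derivation can be sketched in shell. The host and pool names below are hypothetical examples, and the parsing itself is our illustration, not driver code:

```shell
# Hypothetical example: resolve the effective dataset for a `source`
# value with a trailing "/", where the Incus pool name is appended.
pool_name="pool1"
source="nas01.example.com:tank/incus/"   # [<remote host>:]<remote pool>/.../

path="${source#*:}"                      # strip the optional "<remote host>:" prefix
case "$path" in
  */) dataset="${path}${pool_name}" ;;   # trailing "/": derive name from pool
  *)  dataset="$path" ;;
esac
echo "$dataset"                          # tank/incus/pool1
```

If no host prefix is present, `${source#*:}` leaves the value unchanged, so the same logic applies to a bare `<remote pool>/<remote dataset>/` string.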
## Requirements
The driver relies on the `truenas_incus_ctl` tool to interact with the TrueNAS API and perform actions on the remote server. This tool also manages the activation and deactivation of remote ZFS Volumes via `open-iscsi`. If `truenas_incus_ctl` is not installed or available in the system's PATH, the driver is disabled.
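The PATH check described above can be approximated with `command -v`; a minimal sketch (the `driver_status` helper is our own illustration, not part of Incus):

```shell
# Report whether a helper tool is available in PATH, mimicking the
# driver's enable/disable decision described above.
driver_status() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "enabled"
  else
    echo "disabled"
  fi
}

driver_status truenas_incus_ctl
```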
To install the required tool, download the latest version (v0.7.2+ is required) from the `truenas_incus_ctl` GitHub page. Additionally, ensure that `open-iscsi` is installed on the system, which can be done using:
`sudo apt install open-iscsi`
## Logging in to the TrueNAS host
As an alternative to manually creating an API key and supplying it via the `truenas.api_key` property, you can instead log in to the remote server using the `truenas_incus_ctl` tool:
`sudo truenas_incus_ctl config login`
This will prompt you to provide connection details for the TrueNAS server, including authentication details, and will save the configuration to a local file. After logging in, you can verify the iSCSI setup with:
`sudo truenas_incus_ctl share iscsi setup --test`
Once the tool is configured, you can use it to interact with remote datasets and create storage pools:
`incus storage create <poolname> truenas source=[host:]<pool>[/<dataset>]/[remote-poolname]`
In this command:

- `source` refers to the location on the remote TrueNAS host where the storage pool will be created.
- `host` is optional, and can be specified using the `truenas.host` property, or by specifying a configuration with `truenas.config`.
- If `remote-poolname` is not supplied, it will default to the name of the local pool.
## Configuration options

The following configuration options are available for storage pools that use the `truenas` driver and for storage volumes in these pools.

### Storage pool configuration
| Key | Type | Default | Description |
|---|---|---|---|
| `source` | string | - | ZFS dataset to use on the remote TrueNAS host. Format: `[<remote host>:]<remote pool>[/<remote dataset>][/]` |
| `truenas.allow_insecure` | boolean | `false` | If set to `true`, allow insecure connections to the TrueNAS API |
| `truenas.api_key` | string | - | API key used to authenticate with the TrueNAS host |
| `truenas.dataset` | string | - | Remote dataset name. Typically inferred from `source` |
| `truenas.host` | string | - | Hostname or IP address of the remote TrueNAS system. Optional if included in the `source` |
| `truenas.initiator` | string | - | iSCSI initiator name used during block volume attachment |
| `truenas.portal` | string | - | iSCSI portal address to use for block volume connections |
Tip
In addition to these configurations, you can also set default values for the storage volume configurations. See Configure default values for storage volumes.
### Storage volume configuration
| Key | Type | Condition | Default | Description |
|---|---|---|---|---|
| `block.filesystem` | string | | same as `volume.block.filesystem` | File system of the storage volume: `btrfs`, `ext4` or `xfs` (`ext4` if not set) |
| `block.mount_options` | string | | same as `volume.block.mount_options` | Mount options for block-backed file system volumes |
| `initial.gid` | int | custom volume with content type `filesystem` | same as `volume.initial.gid` | GID of the volume owner in the instance |
| `initial.mode` | int | custom volume with content type `filesystem` | same as `volume.initial.mode` | Mode of the volume in the instance |
| `initial.uid` | int | custom volume with content type `filesystem` | same as `volume.initial.uid` | UID of the volume owner in the instance |
| `security.shared` | bool | custom block volume | same as `volume.security.shared` | Enable sharing the volume across multiple instances |
| `security.shifted` | bool | custom volume | same as `volume.security.shifted` | Enable ID shifting overlay (allows attach by multiple isolated instances) |
| `security.unmapped` | bool | custom volume | same as `volume.security.unmapped` | Disable ID mapping for the volume |
| `size` | string | | same as `volume.size` | Size/quota of the storage volume |
| `snapshots.expiry` | string | custom volume | same as `volume.snapshots.expiry` | Controls when snapshots are to be deleted (expects an expression like `1M 2H 3d 4w 5m 6y`) |
| `snapshots.expiry.manual` | string | custom volume | same as `volume.snapshots.expiry.manual` | Controls when snapshots are to be deleted (expects an expression like `1M 2H 3d 4w 5m 6y`) |
| `snapshots.pattern` | string | custom volume | same as `volume.snapshots.pattern` | Pongo2 template string that represents the snapshot name (used for scheduled snapshots and unnamed snapshots) |
| `snapshots.schedule` | string | custom volume | same as `volume.snapshots.schedule` | Cron expression (`<minute> <hour> <dom> <month> <dow>`), or a comma-separated list of schedule aliases (`@hourly`, `@daily`, `@midnight`, `@weekly`, `@monthly`, `@annually`, `@yearly`) |
| `zfs.blocksize` | string | | same as `volume.zfs.blocksize` | Size of the ZFS block in range from 512 bytes to 16 MiB (must be power of 2) - for block volume, a maximum value of 128 KiB will be used even if a higher value is set |
| `zfs.remove_snapshots` | bool | | same as `volume.zfs.remove_snapshots` | Remove snapshots as needed |
| `zfs.use_refquota` | bool | | same as `volume.zfs.use_refquota` | Use `refquota` instead of `quota` for space |
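The `zfs.blocksize` constraint above (a power of two between 512 bytes and 16 MiB) can be sketched as a small validation helper. This is a hypothetical illustration, not part of the driver:

```shell
# Hypothetical validator for a zfs.blocksize value given in bytes:
# must be a power of 2 between 512 and 16 MiB (16777216 bytes).
is_valid_blocksize() {
  n="$1"
  # power-of-2 test: n & (n - 1) is zero only for powers of two
  [ "$n" -ge 512 ] && [ "$n" -le 16777216 ] && [ $(( n & (n - 1) )) -eq 0 ]
}

is_valid_blocksize 131072 && echo "131072 OK"    # 128 KiB: valid
is_valid_blocksize 3000 || echo "3000 rejected"  # not a power of 2
```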