LVM - lvm

LVM is a storage management framework rather than a file system. It is used to manage physical storage devices, allowing you to create a number of logical storage volumes that use and virtualize the underlying physical storage devices.

Note that it is possible to over-commit the physical storage in the process, to allow flexibility for scenarios where not all available storage is in use at the same time.

To use LVM, make sure you have lvm2 installed on your machine.
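
On Debian- or Ubuntu-based systems, for example, the tooling is provided by the lvm2 package (package names may differ on other distributions):

    sudo apt install lvm2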

Terminology

LVM can combine several physical storage devices into a volume group. You can then allocate logical volumes of different types from this volume group.

One supported volume type is a thin pool, which allows over-committing the resources by creating thinly provisioned volumes whose total allowed maximum size is larger than the available physical storage. Another type is a volume snapshot, which captures a specific state of a logical volume.
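
As a rough illustration of these terms using plain LVM tooling (the device, group and volume names below are placeholders, not anything Incus creates for you):

    # Turn two block devices into physical volumes and combine them into a volume group
    sudo pvcreate /dev/sdb /dev/sdc
    sudo vgcreate vg0 /dev/sdb /dev/sdc

    # Create a thin pool inside the volume group
    sudo lvcreate --type thin-pool --size 100G --name pool0 vg0

    # Allocate a thinly provisioned volume; its virtual size may exceed the physical space
    sudo lvcreate --virtualsize 200G --thinpool pool0 --name vol0 vg0

    # Capture the current state of the volume as a snapshot
    sudo lvcreate --snapshot --name vol0-snap vg0/vol0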

lvm driver in Incus

The lvm driver in Incus uses logical volumes for images, and volume snapshots for instances and snapshots.

Incus assumes that it has full control over the volume group. Therefore, you should not maintain any file system entities that are not owned by Incus in an LVM volume group, because Incus might delete them. However, if you need to reuse an existing volume group (for example, because your setup has only one volume group), you can do so by setting the lvm.vg.force_reuse configuration option to true.
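
For example, the following hypothetical commands create a pool on a dedicated block device and a pool that reuses an existing volume group (the pool name, device path and volume group name are placeholders):

    # Create an LVM pool backed by a dedicated block device
    incus storage create my-pool lvm source=/dev/sdb

    # Reuse an existing (possibly non-empty) volume group
    incus storage create my-pool lvm source=existing-vg lvm.vg.force_reuse=true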

By default, LVM storage pools use an LVM thin pool and create logical volumes for all Incus storage entities (images, instances and custom volumes) in there. This behavior can be changed by setting lvm.use_thinpool to false when you create the pool. In this case, Incus uses “normal” logical volumes for all storage entities that are not snapshots. Note that this entails serious performance and space reductions for the lvm driver (close to the dir driver both in speed and storage usage). The reason for this is that most storage operations must fall back to using rsync, because logical volumes that are not thin pools do not support snapshots of snapshots. In addition, non-thin snapshots take up much more storage space than thin snapshots, because they must reserve space for their maximum size at creation time. Therefore, this option should only be chosen if the use case requires it.
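
For example, a pool without thin provisioning could be created like this (names are placeholders):

    incus storage create my-pool lvm source=/dev/sdb lvm.use_thinpool=false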

For environments with a high instance turnover (for example, continuous integration), you should tweak the retain_min and retain_days settings in the backup section of /etc/lvm/lvm.conf to avoid slowdowns when interacting with Incus.
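
A possible adjustment in /etc/lvm/lvm.conf looks like this; the values are only an example and should be chosen to match your environment:

    backup {
        # Example values only: keep fewer and shorter-lived metadata archives
        retain_min = 10
        retain_days = 1
    }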

lvmcluster driver in Incus

A second lvmcluster driver is available for use within clusters.

This relies on the lvmlockd and sanlock daemons to provide distributed locking over a shared disk or set of disks.

It allows using a remote shared block device, such as a Fibre Channel LUN, an NVMe-oF/NVMe-TCP disk or an iSCSI drive, as the backing for an LVM storage pool.

Note

Thin provisioning is incompatible with clustered LVM, so expect higher disk usage.

To use this with Incus, you must do the following (a configuration sketch is given after the list):

  • Have a shared block device available on all your cluster members

  • Install the relevant packages for lvm, lvmlockd and sanlock

  • Enable lvmlockd by setting use_lvmlockd = 1 in your /etc/lvm/lvm.conf

  • Set a unique (within your cluster) host_id value in /etc/lvm/lvmlocal.conf

  • Ensure that both lvmlockd and sanlock daemons are running
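
A minimal sketch of these steps, assuming a shared disk at /dev/sdb and a three-member cluster (the member names, host_id values and device path are placeholders):

    # /etc/lvm/lvm.conf on every member
    global {
        use_lvmlockd = 1
    }

    # /etc/lvm/lvmlocal.conf, with a different host_id on each member
    local {
        host_id = 1
    }

    # Make sure the locking daemons are running (service names may vary by distribution)
    sudo systemctl enable --now lvmlockd sanlock

    # Create the pool: one per-member step, then the final cluster-wide creation
    incus storage create shared-pool lvmcluster source=/dev/sdb --target member1
    incus storage create shared-pool lvmcluster source=/dev/sdb --target member2
    incus storage create shared-pool lvmcluster source=/dev/sdb --target member3
    incus storage create shared-pool lvmcluster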

Configuration options

The following configuration options are available for storage pools that use the lvm driver and for storage volumes in these pools.

Storage pool configuration

| Key | Type | Driver | Default | Description |
|---|---|---|---|---|
| lvm.thinpool_name | string | lvm | IncusThinPool | Thin pool where volumes are created |
| lvm.thinpool_metadata_size | string | lvm | 0 (auto) | The size of the thin pool metadata volume (the default is to let LVM calculate an appropriate size) |
| lvm.metadata_size | string | lvm | 0 (auto) | The size of the metadata space for the physical volume |
| lvm.use_thinpool | bool | lvm | true | Whether the storage pool uses a thin pool for logical volumes |
| lvm.vg.force_reuse | bool | lvm | false | Force using an existing non-empty volume group |
| lvm.vg_name | string | all | name of the pool | Name of the volume group to create |
| rsync.bwlimit | string | all | 0 (no limit) | The upper limit to be placed on the socket I/O when rsync must be used to transfer storage entities |
| rsync.compression | bool | all | true | Whether to use compression while migrating storage pools |
| size | string | lvm | auto (20% of free disk space, >= 5 GiB and <= 30 GiB) | Size of the storage pool when creating loop-based pools (in bytes, suffixes supported, can be increased to grow storage pool) |
| source | string | all | - | Path to an existing block device, loop file or LVM volume group |
| source.wipe | bool | lvm | false | Wipe the block device specified in source prior to creating the storage pool |

Tip

In addition to these configurations, you can also set default values for the storage volume configurations. See Configure default values for storage volumes.
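
For example, a default size for all new volumes in a pool could be set like this (the pool name is a placeholder):

    incus storage set my-pool volume.size 10GiB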

Storage volume configuration

| Key | Type | Condition | Default | Description |
|---|---|---|---|---|
| block.filesystem | string | block-based volume with content type filesystem | same as volume.block.filesystem | File system of the storage volume: btrfs, ext4 or xfs (ext4 if not set) |
| block.mount_options | string | block-based volume with content type filesystem | same as volume.block.mount_options | Mount options for block-backed file system volumes |
| initial.gid | int | custom volume with content type filesystem | same as volume.initial.gid or 0 | GID of the volume owner in the instance |
| initial.mode | int | custom volume with content type filesystem | same as volume.initial.mode or 711 | Mode of the volume in the instance |
| initial.uid | int | custom volume with content type filesystem | same as volume.initial.uid or 0 | UID of the volume owner in the instance |
| lvm.stripes | string | | same as volume.lvm.stripes | Number of stripes to use for new volumes (or thin pool volume) |
| lvm.stripes.size | string | | same as volume.lvm.stripes.size | Size of stripes to use (at least 4096 bytes and multiple of 512 bytes) |
| security.shifted | bool | custom volume | same as volume.security.shifted or false | Enable ID shifting overlay (allows attach by multiple isolated instances) |
| security.unmapped | bool | custom volume | same as volume.security.unmapped or false | Disable ID mapping for the volume |
| security.shared | bool | custom block volume | same as volume.security.shared or false | Enable sharing the volume across multiple instances |
| size | string | | same as volume.size | Size/quota of the storage volume |
| snapshots.expiry | string | custom volume | same as volume.snapshots.expiry | Controls when snapshots are to be deleted (expects an expression like 1M 2H 3d 4w 5m 6y) |
| snapshots.pattern | string | custom volume | same as volume.snapshots.pattern or snap%d | Pongo2 template string that represents the snapshot name (used for scheduled snapshots and unnamed snapshots) |
| snapshots.schedule | string | custom volume | same as volume.snapshots.schedule | Cron expression (<minute> <hour> <dom> <month> <dow>), a comma-separated list of schedule aliases (@hourly, @daily, @midnight, @weekly, @monthly, @annually, @yearly), or empty to disable automatic snapshots (the default) |
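
For example, the following enables daily scheduled snapshots with a one-week expiry on a custom volume (pool and volume names are placeholders):

    incus storage volume set my-pool my-volume snapshots.schedule @daily
    incus storage volume set my-pool my-volume snapshots.expiry 1w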

Storage bucket configuration

To enable storage buckets for local storage pool drivers and allow applications to access the buckets via the S3 protocol, you must configure the core.storage_buckets_address server setting.
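
For example (the listen address, pool name and bucket name are placeholders):

    # Expose the S3 buckets API on port 8555 on all addresses
    incus config set core.storage_buckets_address :8555

    # Create a bucket on an LVM pool
    incus storage bucket create my-pool my-bucket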

| Key | Type | Condition | Default | Description |
|---|---|---|---|---|
| size | string | appropriate driver | same as volume.size | Size/quota of the storage bucket |