CephFS - cephfs

Ceph is an open-source storage platform that stores its data in a storage cluster based on RADOS. It is highly scalable and, as a distributed system without a single point of failure, very reliable.

Ceph provides different components for block storage and for file systems.

CephFS is Ceph’s file system component that provides a robust, fully-featured POSIX-compliant distributed file system. Internally, it maps files to Ceph objects and stores file metadata (for example, file ownership, directory paths, access permissions) in a separate metadata pool.

Terminology

Ceph uses the term object for the data that it stores. The daemon that is responsible for storing and managing data is the Ceph OSD. Ceph’s storage is divided into pools, which are logical partitions for storing objects. They are also referred to as data pools, storage pools or OSD pools.

A CephFS file system consists of two OSD storage pools, one for the actual data and one for the file metadata.
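As a rough sketch of what this looks like on the Ceph side (the file system and pool names below are only examples, not values required by Incus), such a file system and its two pools can be created with the ceph CLI:

```
# Create the data and metadata OSD pools
# (recent Ceph releases pick pg_num via the autoscaler, so it is omitted here)
ceph osd pool create my-filesystem_data
ceph osd pool create my-filesystem_metadata

# Create the CephFS file system on top of the metadata and data pools
ceph fs new my-filesystem my-filesystem_metadata my-filesystem_data
```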

cephfs driver in Incus

Note

The cephfs driver can only be used for custom storage volumes with content type filesystem.

For other storage volumes, use the Ceph driver. That driver can also be used for custom storage volumes with content type filesystem, but it implements them through Ceph RBD images.

Unlike other storage drivers, this driver does not set up the storage system but assumes that you already have a Ceph cluster installed.

You can either create the CephFS file system that you want to use beforehand and specify it through the source option, or specify the cephfs.create_missing option to automatically create the file system and the data and metadata OSD pools (with the names given in cephfs.data_pool and cephfs.meta_pool).
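A minimal sketch of both approaches (the Incus pool names, file system names and OSD pool names are placeholders):

```
# Use an existing CephFS file system called my-filesystem
incus storage create pool1 cephfs source=my-filesystem

# Let Incus create the file system my-new-fs and its OSD pools itself
incus storage create pool2 cephfs source=my-new-fs cephfs.create_missing=true \
    cephfs.meta_pool=my-metadata cephfs.data_pool=my-data
```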

This driver also behaves differently than other drivers in that it provides remote storage. As a result and depending on the internal network, storage access might be a bit slower than for local storage. On the other hand, using remote storage has big advantages in a cluster setup, because all cluster members have access to the same storage pools with the exact same contents, without the need to synchronize storage pools.

Incus assumes that it has full control over the OSD storage pool. Therefore, you should never maintain any file system entities that are not owned by Incus in an Incus OSD storage pool, because Incus might delete them.

The cephfs driver in Incus supports snapshots if snapshots are enabled on the server side.
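A short usage sketch, assuming a cephfs pool named pool1 (created as above) and an existing instance named c1 (both names are placeholders):

```
# Create a custom storage volume (content type filesystem is the default)
incus storage volume create pool1 my-volume

# Attach the volume to the instance at /data
incus storage volume attach pool1 my-volume c1 /data

# Take a snapshot of the volume (requires snapshot support on the Ceph side)
incus storage volume snapshot create pool1 my-volume snap0
```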

Configuration options

The following configuration options are available for storage pools that use the cephfs driver and for storage volumes in these pools.

Storage pool configuration

| Key                    | Type   | Default | Description                                                        |
|------------------------|--------|---------|--------------------------------------------------------------------|
| cephfs.cluster_name    | string | ceph    | Name of the Ceph cluster that contains the CephFS file system      |
| cephfs.create_missing  | bool   | false   | Create the file system and the missing data and metadata OSD pools |
| cephfs.data_pool       | string | -       | Data OSD pool name to create for the file system                   |
| cephfs.fscache         | bool   | false   | Enable use of kernel fscache and cachefilesd                       |
| cephfs.meta_pool       | string | -       | Metadata OSD pool name to create for the file system               |
| cephfs.osd_pg_num      | string | -       | OSD pool pg_num to use when creating missing OSD pools             |
| cephfs.path            | string | /       | The base path for the CephFS mount                                 |
| cephfs.user.name       | string | admin   | The Ceph user to use                                               |
| source                 | string | -       | Existing CephFS file system or file system path to use             |
| volatile.pool.pristine | string | true    | Whether the CephFS file system was empty at creation time          |

Tip

In addition to these configurations, you can also set default values for the storage volume configurations. See Configure default values for storage volumes.
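For example, a pool-level default for new custom volumes can be set by prefixing the volume key with volume. (the pool name and the chosen key/value below are only illustrative):

```
# New custom volumes in pool1 will inherit a daily snapshot schedule
incus storage set pool1 volume.snapshots.schedule=@daily
```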

Storage volume configuration

| Key                | Type   | Condition           | Default                                      | Description                                                                                                                                                                                                             |
|--------------------|--------|---------------------|----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| security.shared    | bool   | custom block volume | same as volume.security.shared or false      | Enable sharing the volume across multiple instances                                                                                                                                                                      |
| security.shifted   | bool   | custom volume       | same as volume.security.shifted or false     | Enable ID shifting overlay (allows attach by multiple isolated instances)                                                                                                                                                |
| security.unmapped  | bool   | custom volume       | same as volume.security.unmapped or false    | Disable ID mapping for the volume                                                                                                                                                                                        |
| size               | string | appropriate driver  | same as volume.size                          | Size/quota of the storage volume                                                                                                                                                                                         |
| snapshots.expiry   | string | custom volume       | same as volume.snapshots.expiry              | Controls when snapshots are to be deleted (expects an expression like 1M 2H 3d 4w 5m 6y)                                                                                                                                 |
| snapshots.pattern  | string | custom volume       | same as volume.snapshots.pattern or snap%d   | Pongo2 template string that represents the snapshot name (used for scheduled snapshots and unnamed snapshots) [1]                                                                                                        |
| snapshots.schedule | string | custom volume       | same as volume.snapshots.schedule            | Cron expression (<minute> <hour> <dom> <month> <dow>), a comma-separated list of schedule aliases (@hourly, @daily, @midnight, @weekly, @monthly, @annually, @yearly), or empty to disable automatic snapshots (the default) |
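As a short example of how the snapshot-related keys combine (the pool name, volume name and values are illustrative only), snapshots of a custom volume could be scheduled, named and expired like this:

```
# Snapshot the volume every day at midnight
incus storage volume set pool1 my-volume snapshots.schedule="0 0 * * *"

# Name scheduled snapshots daily-0, daily-1, ...
incus storage volume set pool1 my-volume snapshots.pattern="daily-%d"

# Delete each snapshot two weeks after it was taken
incus storage volume set pool1 my-volume snapshots.expiry=2w
```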