Instance options¶
Instance options are configuration options that are directly related to the instance.
See Configure instance options for instructions on how to set the instance options.
The key/value configuration is namespaced. The following options are available:
Note that while a type is defined for each option, all values are stored as strings and should be exported over the REST API as strings (which makes it possible to support any extra values without breaking backward compatibility).
Miscellaneous options¶
In addition to the configuration options listed in the following sections, these instance options are supported:
Key: | agent.nic_config |
Type: | bool |
Default: | |
Live update: | no |
Condition: | virtual machine |
For containers, the name and MTU of the default network interfaces are used for the instance devices.
For virtual machines, set this option to true to set the name and MTU of the default network interfaces to be the same as the instance devices.
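As a quick illustration (the instance name v1 is hypothetical), this is set like any other instance option:

```shell
# Have the default NIC inside the VM pick up the name and MTU
# of the corresponding instance device.
incus config set v1 agent.nic_config=true
```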
Key: | cluster.evacuate |
Type: | string |
Default: | |
Live update: | no |
The cluster.evacuate option controls how instances are handled when a cluster member is evacuated.
Available modes:
auto (default): The system automatically decides the best evacuation method based on the instance’s type and configured devices. If any device is not suitable for migration, the instance is not migrated (only stopped). Live migration is used only for virtual machines that have the migration.stateful setting enabled and whose devices can all be migrated as well.
live-migrate: Instances are live-migrated to another server. The instance remains running and operational during the migration, ensuring minimal disruption.
migrate: Instances are migrated to another server in the cluster. The migration is not live, so there is a brief downtime for the instance during the migration.
stop: Instances are not migrated. Instead, they are stopped on the current server.
stateful-stop: Instances are not migrated. Instead, they are stopped on the current server, with their runtime state (memory) stored on disk for resuming on restore.
force-stop: Instances are not migrated. Instead, they are forcefully stopped.
See Evacuate and restore cluster members for more information.
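As an illustration (the instance and cluster member names here are hypothetical), the evacuation mode is set per instance and then honored when the member is evacuated:

```shell
# Always live-migrate this VM when its cluster member is evacuated.
incus config set v1 cluster.evacuate=live-migrate

# Evacuate all instances from the member "server1" according to
# each instance's cluster.evacuate setting.
incus cluster evacuate server1

# Later, move the instances back.
incus cluster restore server1
```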
Key: | linux.kernel_modules |
Type: | string |
Live update: | yes |
Condition: | container |
Specify the kernel modules as a comma-separated list.
Key: | linux.sysctl.* |
Type: | string |
Live update: | no |
Condition: | container |
Key: | user.* |
Type: | string |
Live update: | no |
User keys can be used in search.
Key: | environment.* |
Type: | string |
Live update: | yes (exec) |
You can export key/value environment variables to the instance.
These are then set for incus exec.
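For example (assuming a container named c1; the variable value is illustrative):

```shell
# Export an environment variable to the instance.
incus config set c1 environment.HTTP_PROXY=http://proxy.example.com:3128

# The variable is then set for commands run via "incus exec".
incus exec c1 -- env
```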
cloud-init
configuration¶
The following instance options control the cloud-init
configuration of the instance:
Key: | cloud-init.network-config |
Type: | string |
Default: | |
Live update: | no |
Condition: | If supported by image |
The content is used as seed value for cloud-init.
Key: | cloud-init.user-data |
Type: | string |
Default: | |
Live update: | no |
Condition: | If supported by image |
The content is used as seed value for cloud-init.
Key: | cloud-init.vendor-data |
Type: | string |
Default: | |
Live update: | no |
Condition: | If supported by image |
The content is used as seed value for cloud-init.
Key: | user.network-config |
Type: | string |
Default: | |
Live update: | no |
Condition: | If supported by image |
Key: | user.user-data |
Type: | string |
Default: | |
Live update: | no |
Condition: | If supported by image |
Key: | user.vendor-data |
Type: | string |
Default: | |
Live update: | no |
Condition: | If supported by image |
Support for these options depends on the image that is used and is not guaranteed.
If you specify both cloud-init.user-data and cloud-init.vendor-data, the content of both options is merged.
Therefore, make sure that the cloud-init configuration you specify in those options does not contain the same keys.
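A minimal sketch of seeding user data (the instance name c1 and the file user-data.yaml are assumptions):

```shell
# user-data.yaml contains a #cloud-config document.
# Its keys must not collide with keys set in cloud-init.vendor-data,
# because the two options are merged.
incus config set c1 cloud-init.user-data "$(cat user-data.yaml)"
```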
Resource limits¶
The following instance options specify resource limits for the instance:
Key: | limits.cpu |
Type: | string |
Default: | 1 (VMs) |
Live update: | yes |
A number or a specific range of CPUs to expose to the instance.
See CPU pinning for more information.
Key: | limits.cpu.allowance |
Type: | string |
Default: | 100% |
Live update: | yes |
Condition: | container |
To control how much of the CPU can be used, specify either a percentage (50%) for a soft limit or a chunk of time (25ms/100ms) for a hard limit.
See Allowance and priority (container only) for more information.
Key: | limits.cpu.nodes |
Type: | string |
Live update: | yes |
A comma-separated list of NUMA node IDs or ranges to place the instance CPUs on.
Alternatively, the value balanced may be used to have Incus pick the least busy NUMA node on startup.
See CPU pinning for more information.
Key: | limits.cpu.priority |
Type: | integer |
Default: | |
Live update: | yes |
Condition: | container |
When overcommitting resources, specify the CPU scheduling priority compared to other instances that share the same CPUs. Specify an integer between 0 and 10.
See Allowance and priority (container only) for more information.
Key: | limits.disk.priority |
Type: | integer |
Default: | |
Live update: | yes |
Controls how much priority to give to the instance’s I/O requests when under load.
Specify an integer between 0 and 10.
Key: | limits.hugepages.1GB |
Type: | string |
Live update: | yes |
Condition: | container |
Fixed value (in bytes) to limit the number of 1 GB huge pages. Various suffixes are supported (see Units for storage, memory and network limits).
See Huge page limits for more information.
Key: | limits.hugepages.1MB |
Type: | string |
Live update: | yes |
Condition: | container |
Fixed value (in bytes) to limit the number of 1 MB huge pages. Various suffixes are supported (see Units for storage, memory and network limits).
See Huge page limits for more information.
Key: | limits.hugepages.2MB |
Type: | string |
Live update: | yes |
Condition: | container |
Fixed value (in bytes) to limit the number of 2 MB huge pages. Various suffixes are supported (see Units for storage, memory and network limits).
See Huge page limits for more information.
Key: | limits.hugepages.64KB |
Type: | string |
Live update: | yes |
Condition: | container |
Fixed value (in bytes) to limit the number of 64 KB huge pages. Various suffixes are supported (see Units for storage, memory and network limits).
See Huge page limits for more information.
Key: | limits.memory |
Type: | string |
Default: | |
Live update: | yes |
Percentage of the host’s memory or a fixed value in bytes. Various suffixes are supported.
See Units for storage, memory and network limits for details.
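For example (container name c1 assumed):

```shell
# Fixed limit of 2 GiB ...
incus config set c1 limits.memory=2GiB
# ... or half of the host's memory.
incus config set c1 limits.memory=50%
```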
Key: | limits.memory.enforce |
Type: | string |
Default: | |
Live update: | yes |
Condition: | container |
If the instance’s memory limit is hard, the instance cannot exceed its limit.
If it is soft, the instance can exceed its memory limit when extra host memory is available.
Key: | limits.memory.hugepages |
Type: | bool |
Default: | |
Live update: | no |
Condition: | virtual machine |
If this option is set to true, the virtual machine’s memory is backed by huge pages; if set to false, regular system memory is used.
Key: | limits.memory.swap |
Type: | string |
Default: | |
Live update: | yes |
Condition: | container |
When set to true or false, it controls whether the container is likely to get some of its memory swapped by the kernel.
Alternatively, it can be set to a bytes value, which then allows the container to make use of additional memory through swap.
Key: | limits.memory.swap.priority |
Type: | integer |
Default: | |
Live update: | yes |
Condition: | container |
Specify an integer between 0 and 10. The higher the value, the less likely the instance is to be swapped to disk.
Key: | limits.processes |
Type: | integer |
Default: | empty |
Live update: | yes |
Condition: | container |
Maximum number of processes that can run in the instance. If left empty, no limit is set.
Key: | limits.kernel.* |
Type: | string |
Live update: | no |
Condition: | container |
You can set kernel limits on an instance, for example, you can limit the number of open files. See Kernel resource limits for more information.
CPU limits¶
You have different options to limit CPU usage:
Set limits.cpu to restrict which CPUs the instance can see and use. See CPU pinning for how to set this option.
Set limits.cpu.allowance to restrict the load an instance can put on the available CPUs. This option is available only for containers. See Allowance and priority (container only) for how to set this option.
It is possible to set both options at the same time to restrict both which CPUs are visible to the instance and the allowed usage of those CPUs.
However, if you use limits.cpu.allowance with a time limit, you should avoid using limits.cpu in addition, because that puts a lot of constraints on the scheduler and might lead to less efficient allocations.
The CPU limits are implemented through a mix of the cpuset
and cpu
cgroup controllers.
CPU pinning¶
limits.cpu results in CPU pinning through the cpuset controller.
You can specify either which CPUs or how many CPUs are visible and available to the instance:
To specify which CPUs to use, set limits.cpu to either a set of CPUs (for example, 1,2,3) or a CPU range (for example, 0-3).
To pin to a single CPU, use the range syntax (for example, 1-1) to differentiate it from a number of CPUs.
If you specify a number of CPUs (for example, 4), Incus will do dynamic load-balancing of all instances that aren’t pinned to specific CPUs, trying to spread the load on the machine. Instances are re-balanced every time an instance starts or stops, as well as whenever a CPU is added to the system.
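The variants above can be sketched as follows (instance name c1 assumed):

```shell
# Pin to CPUs 0 through 3.
incus config set c1 limits.cpu=0-3
# Pin to a single CPU; range syntax distinguishes this from a count.
incus config set c1 limits.cpu=1-1
# Expose four CPUs and let Incus load-balance the placement.
incus config set c1 limits.cpu=4
```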
CPU limits for virtual machines¶
Note
Incus supports live-updating the limits.cpu option.
However, for virtual machines, this only means that the respective CPUs are hotplugged.
Depending on the guest operating system, you might need to either restart the instance or complete some manual actions to bring the new CPUs online.
Incus virtual machines default to having just one vCPU allocated, which shows up as matching the host CPU vendor and type, but has a single core and no threads.
When limits.cpu is set to a single integer, Incus allocates multiple vCPUs and exposes them to the guest as full cores.
Those vCPUs are not pinned to specific physical cores on the host.
The number of vCPUs can be updated while the VM is running.
When limits.cpu is set to a range or comma-separated list of CPU IDs (as provided by incus info --resources), the vCPUs are pinned to those physical cores.
In this scenario, Incus checks whether the CPU configuration lines up with a realistic hardware topology and if it does, it replicates that topology in the guest.
When doing CPU pinning, it is not possible to change the configuration while the VM is running.
For example, if the pinning configuration includes eight threads, with each pair of threads coming from the same core and an even number of cores spread across two CPUs, the guest will show two CPUs, each with two cores and each core with two threads. The NUMA layout is similarly replicated, and in this scenario, the guest would most likely end up with two NUMA nodes, one for each CPU socket.
In such an environment with multiple NUMA nodes, the memory is similarly divided across NUMA nodes and pinned accordingly on the host and then exposed to the guest.
All this allows for very high performance operations in the guest as the guest scheduler can properly reason about sockets, cores and threads as well as consider NUMA topology when sharing memory or moving processes across NUMA nodes.
Allowance and priority (container only)¶
limits.cpu.allowance drives either the CFS scheduler quotas when passed a time constraint, or the generic CPU shares mechanism when passed a percentage value:
The time constraint (for example, 20ms/50ms) is a hard limit. For example, if you want to allow the container to use a maximum of one CPU, set limits.cpu.allowance to a value like 100ms/100ms. The value is relative to one CPU worth of time, so to restrict to two CPUs worth of time, use something like 100ms/50ms or 200ms/100ms.
When using a percentage value, the limit is a soft limit that is applied only when under load. It is used to calculate the scheduler priority for the instance, relative to any other instance that is using the same CPU or CPUs. For example, to limit the CPU usage of the container to one CPU when under load, set limits.cpu.allowance to 100%.
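As a sketch of both forms (container name c1 assumed):

```shell
# Hard limit: two CPUs worth of time.
incus config set c1 limits.cpu.allowance=200ms/100ms
# Soft limit: one CPU worth of time, enforced only under load.
incus config set c1 limits.cpu.allowance=100%
```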
limits.cpu.priority is another factor that is used to compute the scheduler priority score when a number of instances sharing a set of CPUs have the same percentage of CPU assigned to them.
Huge page limits¶
Incus allows you to limit the number of huge pages available to a container through the limits.hugepages.[size] key.
Architectures often expose multiple huge-page sizes. The available huge-page sizes depend on the architecture.
Setting limits for huge pages is especially useful when Incus is configured to intercept the mount syscall for the hugetlbfs file system in unprivileged containers.
When Incus intercepts a hugetlbfs mount syscall, it mounts the hugetlbfs file system for a container with correct uid and gid values as mount options. This makes it possible to use huge pages from unprivileged containers.
However, it is recommended to limit the number of huge pages available to the container through limits.hugepages.[size] to stop the container from being able to exhaust the huge pages available to the host.
Limiting huge pages is done through the hugetlb cgroup controller, which means that the host system must expose the hugetlb controller in the legacy or unified cgroup hierarchy for these limits to apply.
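A possible combination of interception and limits might look like this (container name c1 assumed; adjust sizes to your workload):

```shell
# Intercept mount syscalls and allow hugetlbfs mounts.
incus config set c1 security.syscalls.intercept.mount=true
incus config set c1 security.syscalls.intercept.mount.allowed=hugetlbfs
# Cap the container at 1 GiB worth of 2 MB huge pages.
incus config set c1 limits.hugepages.2MB=1GiB
```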
Kernel resource limits¶
For container instances, Incus exposes a generic namespaced key limits.kernel.* that can be used to set resource limits.
It is generic in the sense that Incus does not perform any validation on the resource that is specified following the limits.kernel.* prefix. Incus cannot know about all the possible resources that a given kernel supports. Instead, Incus simply passes down the resource key after the limits.kernel.* prefix and its value to the kernel. The kernel does the appropriate validation. This allows users to specify any supported limit on their system.
Some common limits are:
Key: | limits.kernel.as |
Type: | string |
Resource: | |
Key: | limits.kernel.core |
Type: | string |
Resource: | |
Key: | limits.kernel.cpu |
Type: | string |
Resource: | |
Key: | limits.kernel.data |
Type: | string |
Resource: | |
Key: | limits.kernel.fsize |
Type: | string |
Resource: | |
Key: | limits.kernel.locks |
Type: | string |
Resource: | |
Key: | limits.kernel.memlock |
Type: | string |
Resource: | |
Key: | limits.kernel.nice |
Type: | string |
Resource: | |
Key: | limits.kernel.nofile |
Type: | string |
Resource: | |
Key: | limits.kernel.nproc |
Type: | string |
Resource: | |
Maximum number of processes that can be created for the user of the calling process.
Key: | limits.kernel.rtprio |
Type: | string |
Resource: | |
Key: | limits.kernel.sigpending |
Type: | string |
Resource: | |
A full list of all available limits can be found in the manpages for the getrlimit(2)/setrlimit(2) system calls.
To specify a limit within the limits.kernel.* namespace, use the resource name in lowercase without the RLIMIT_ prefix. For example, RLIMIT_NOFILE should be specified as nofile.
A limit is specified as two colon-separated values that are either numeric or the word unlimited (for example, limits.kernel.nofile=1000:2000). A single value can be used as a shortcut to set both soft and hard limit to the same value (for example, limits.kernel.nofile=3000).
A resource with no explicitly configured limit will inherit its limit from the process that starts up the container. Note that this inheritance is not enforced by Incus but by the kernel.
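For example (container name c1 assumed):

```shell
# Soft limit of 1000 and hard limit of 2000 open files (RLIMIT_NOFILE).
incus config set c1 limits.kernel.nofile=1000:2000
# A single value sets both soft and hard limits.
incus config set c1 limits.kernel.nofile=3000
```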
Migration options¶
The following instance options control the behavior if the instance is moved from one Incus server to another:
Key: | migration.incremental.memory |
Type: | bool |
Default: | |
Live update: | yes |
Condition: | container |
Using incremental transfer of the instance’s memory can reduce downtime.
Key: | migration.incremental.memory.goal |
Type: | integer |
Default: | |
Live update: | yes |
Condition: | container |
NVIDIA and CUDA configuration¶
The following instance options specify the NVIDIA and CUDA configuration of the instance:
Key: | nvidia.driver.capabilities |
Type: | string |
Default: | |
Live update: | no |
Condition: | container |
The specified driver capabilities are used to set libnvidia-container NVIDIA_DRIVER_CAPABILITIES.
Key: | nvidia.require.cuda |
Type: | string |
Live update: | no |
Condition: | container |
The specified version expression is used to set libnvidia-container NVIDIA_REQUIRE_CUDA.
Key: | nvidia.require.driver |
Type: | string |
Live update: | no |
Condition: | container |
The specified version expression is used to set libnvidia-container NVIDIA_REQUIRE_DRIVER.
Raw instance configuration overrides¶
The following instance options allow direct interaction with the backend features that Incus itself uses:
Key: | raw.apparmor |
Type: | blob |
Live update: | yes |
The specified entries are appended to the generated profile.
Key: | raw.idmap |
Type: | blob |
Live update: | no |
Condition: | unprivileged container |
For example: both 1000 1000
Key: | raw.lxc |
Type: | blob |
Live update: | no |
Condition: | container |
Key: | raw.qemu |
Type: | blob |
Live update: | no |
Condition: | virtual machine |
Key: | raw.qemu.conf |
Type: | blob |
Live update: | no |
Condition: | virtual machine |
See Override QEMU configuration for more information.
Key: | raw.qemu.qmp.early |
Type: | blob |
Live update: | no |
Condition: | virtual machine |
Key: | raw.qemu.qmp.post-start |
Type: | blob |
Live update: | no |
Condition: | virtual machine |
Key: | raw.qemu.qmp.pre-start |
Type: | blob |
Live update: | no |
Condition: | virtual machine |
QMP commands to run after Incus QEMU initialization and before the VM has started.
Key: | raw.qemu.scriptlet |
Type: | string |
Live update: | no |
Condition: | virtual machine |
Key: | raw.seccomp |
Type: | blob |
Live update: | no |
Condition: | container |
Important
Setting these raw.* keys might break Incus in non-obvious ways.
Therefore, you should avoid setting any of these keys.
Override QEMU configuration¶
For VM instances, Incus configures QEMU through a configuration file that is passed to QEMU with the -readconfig command-line option.
This configuration file is generated for each instance before boot. It can be found at /run/incus/<instance_name>/qemu.conf.
The default configuration works fine for Incus’ most common use case: modern UEFI guests with VirtIO devices. In some situations, however, you might need to override the generated configuration. For example:
To run an old guest OS that doesn’t support UEFI.
To specify custom virtual devices when VirtIO is not supported by the guest OS.
To add devices that are not supported by Incus before the machine boots.
To remove devices that conflict with the guest OS.
To override the configuration, set the raw.qemu.conf option.
It supports a format similar to qemu.conf, with some additions.
Since it is a multi-line configuration option, you can use it to modify multiple sections or keys.
To replace a section or key in the generated configuration file, add a section with a different value.
For example, use the following section to override the default virtio-gpu-pci GPU driver:
raw.qemu.conf: |-
  [device "qemu_gpu"]
  driver = "qxl-vga"
To remove a section, specify a section without any keys. For example:
raw.qemu.conf: |-
  [device "qemu_gpu"]
To remove a key, specify an empty string as the value. For example:
raw.qemu.conf: |-
  [device "qemu_gpu"]
  driver = ""
To add a new section, specify a section name that is not present in the configuration file.
The configuration file format used by QEMU allows multiple sections with the same name. Here’s a piece of the configuration generated by Incus:
[global]
driver = "ICH9-LPC"
property = "disable_s3"
value = "1"
[global]
driver = "ICH9-LPC"
property = "disable_s4"
value = "1"
To specify which section to override, specify an index. For example:
raw.qemu.conf: |-
[global][1]
value = "0"
Section indexes start at 0 (which is the default value when not specified), so the above example would generate the following configuration:
[global]
driver = "ICH9-LPC"
property = "disable_s3"
value = "1"
[global]
driver = "ICH9-LPC"
property = "disable_s4"
value = "0"
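Since raw.qemu.conf is a multi-line value, a heredoc keeps it intact when setting it from the shell (the VM name v1 is hypothetical):

```shell
incus config set v1 raw.qemu.conf="$(cat <<'EOF'
[global][1]
value = "0"
EOF
)"
```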
Security policies¶
The following instance options control the Security policies of the instance:
Key: | security.agent.metrics |
Type: | bool |
Default: | |
Live update: | no |
Condition: | virtual machine |
Key: | security.csm |
Type: | bool |
Default: | |
Live update: | no |
Condition: | virtual machine |
When enabling this option, set security.secureboot to false.
Key: | security.guestapi |
Type: | bool |
Default: | |
Live update: | no |
See Communication between instance and host for more information.
Key: | security.guestapi.images |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
Key: | security.idmap.base |
Type: | integer |
Live update: | no |
Condition: | unprivileged container |
Setting this option overrides auto-detection.
Key: | security.idmap.isolated |
Type: | bool |
Default: | |
Live update: | no |
Condition: | unprivileged container |
If specified, the idmap used for this instance is unique among instances that have this option set.
Key: | security.idmap.size |
Type: | integer |
Live update: | no |
Condition: | unprivileged container |
Key: | security.nesting |
Type: | bool |
Default: | |
Live update: | yes |
Condition: | container |
Key: | security.privileged |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
Key: | security.protection.delete |
Type: | bool |
Default: | |
Live update: | yes |
Key: | security.protection.shift |
Type: | bool |
Default: | |
Live update: | yes |
Condition: | container |
Set this option to true to prevent the instance’s file system from being UID/GID shifted on startup.
Key: | security.secureboot |
Type: | bool |
Default: | |
Live update: | no |
Condition: | virtual machine |
When disabling this option, consider enabling security.csm.
Key: | security.sev |
Type: | bool |
Default: | |
Live update: | no |
Condition: | virtual machine |
Key: | security.sev.policy.es |
Type: | bool |
Default: | |
Live update: | no |
Condition: | virtual machine |
Key: | security.sev.session.data |
Type: | string |
Default: | |
Live update: | no |
Condition: | virtual machine |
Key: | security.sev.session.dh |
Type: | string |
Default: | |
Live update: | no |
Condition: | virtual machine |
Key: | security.syscalls.allow |
Type: | string |
Live update: | no |
Condition: | container |
A \n-separated list of syscalls to allow.
This list must be mutually exclusive with security.syscalls.deny*.
Key: | security.syscalls.deny |
Type: | string |
Live update: | no |
Condition: | container |
A \n-separated list of syscalls to deny.
This list must be mutually exclusive with security.syscalls.allow.
Key: | security.syscalls.deny_compat |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
On x86_64, this option controls whether to block compat_* syscalls.
On other architectures, the option is ignored.
Key: | security.syscalls.deny_default |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
Key: | security.syscalls.intercept.bpf |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
Key: | security.syscalls.intercept.bpf.devices |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
This option controls whether to allow BPF programs for the devices cgroup in the unified hierarchy to be loaded.
Key: | security.syscalls.intercept.mknod |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
These system calls allow creation of a limited subset of char/block devices.
Key: | security.syscalls.intercept.mount |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
Key: | security.syscalls.intercept.mount.allowed |
Type: | string |
Live update: | yes |
Condition: | container |
Specify a comma-separated list of file systems that are safe to mount for processes inside the instance.
Key: | security.syscalls.intercept.mount.fuse |
Type: | string |
Live update: | yes |
Condition: | container |
Specify the mounts of a given file system that should be redirected to their FUSE implementation (for example, ext4=fuse2fs).
Key: | security.syscalls.intercept.mount.shift |
Type: | bool |
Default: | |
Live update: | yes |
Condition: | container |
Key: | security.syscalls.intercept.sched_setscheduler |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
This system call allows increasing process priority.
Key: | security.syscalls.intercept.setxattr |
Type: | bool |
Default: | |
Live update: | no |
Condition: | container |
This system call allows setting a limited subset of restricted extended attributes.
Snapshot scheduling and configuration¶
The following instance options control the creation and expiry of instance snapshots:
Key: | snapshots.expiry |
Type: | string |
Live update: | no |
Specify an expression like 1M 2H 3d 4w 5m 6y.
Key: | snapshots.pattern |
Type: | string |
Default: | |
Live update: | no |
Specify a Pongo2 template string that represents the snapshot name. This template is used for scheduled snapshots and for unnamed snapshots.
See Automatic snapshot names for more information.
Key: | snapshots.schedule |
Type: | string |
Default: | empty |
Live update: | no |
Specify either a cron expression (<minute> <hour> <dom> <month> <dow>), a comma-and-space-separated list of schedule aliases (@startup, @hourly, @daily, @midnight, @weekly, @monthly, @annually, @yearly), or leave empty to disable automatic snapshots.
Note that unlike most other configuration keys, this one must be comma-and-space-separated and not just comma-separated, as cron expressions can themselves contain commas.
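For example (container name c1 assumed):

```shell
# Snapshot daily at 06:00 and keep each snapshot for one week.
incus config set c1 snapshots.schedule="0 6 * * *"
incus config set c1 snapshots.expiry=1w
# Aliases must be comma-and-space-separated.
incus config set c1 snapshots.schedule="@daily, @startup"
```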
Key: | snapshots.schedule.stopped |
Type: | bool |
Default: | |
Live update: | no |
Automatic snapshot names¶
The snapshots.pattern option takes a Pongo2 template string to format the snapshot name.
To add a time stamp to the snapshot name, use the Pongo2 context variable creation_date. Make sure to format the date in your template string to avoid forbidden characters in the snapshot name. For example, set snapshots.pattern to {{ creation_date|date:'2006-01-02_15-04-05' }} to name the snapshots after their time of creation, down to the precision of a second.
Another way to avoid name collisions is to use the placeholder %d in the pattern. For the first snapshot, the placeholder is replaced with 0. For subsequent snapshots, the existing snapshot names are taken into account to find the highest number at the placeholder’s position. This number is then incremented by one for the new name.
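Both approaches can be sketched as follows (container name c1 assumed):

```shell
# Name snapshots after their creation time.
incus config set c1 snapshots.pattern="{{ creation_date|date:'2006-01-02_15-04-05' }}"
# Or use an incrementing number: snap0, snap1, ...
incus config set c1 snapshots.pattern="snap%d"
```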
Volatile internal data¶
The following volatile keys are currently used internally by Incus to store internal data specific to an instance:
Key: | volatile.<name>.apply_quota |
Type: | string |
The disk quota is applied the next time the instance starts.
Key: | volatile.<name>.ceph_rbd |
Type: | string |
Key: | volatile.<name>.host_name |
Type: | string |
Key: | volatile.<name>.hwaddr |
Type: | string |
The network device MAC address is used when no hwaddr property is set on the device itself.
Key: | volatile.<name>.last_state.created |
Type: | string |
Possible values are true or false.
Key: | volatile.<name>.last_state.hwaddr |
Type: | string |
The original MAC that was used when moving a physical device into an instance.
Key: | volatile.<name>.last_state.ip_addresses |
Type: | string |
Comma-separated list of the last used IP addresses of the network device.
Key: | volatile.<name>.last_state.mtu |
Type: | string |
The original MTU that was used when moving a physical device into an instance.
Key: | volatile.<name>.last_state.pci.driver |
Type: | string |
The original host driver for the PCI device.
Key: | volatile.<name>.last_state.pci.parent |
Type: | string |
The parent host device used when allocating a PCI device to an instance.
Key: | volatile.<name>.last_state.pci.slot.name |
Type: | string |
The parent host device PCI slot name.
Key: | volatile.<name>.last_state.usb.bus |
Type: | string |
The original USB bus address.
Key: | volatile.<name>.last_state.usb.device |
Type: | string |
The original USB device identifier.
Key: | volatile.<name>.last_state.vdpa.name |
Type: | string |
The VDPA device name used when moving a VDPA device file descriptor into an instance.
Key: | volatile.<name>.last_state.vf.hwaddr |
Type: | string |
The original MAC used when moving a VF into an instance.
Key: | volatile.<name>.last_state.vf.id |
Type: | string |
The ID used when moving a VF into an instance.
Key: | volatile.<name>.last_state.vf.parent |
Type: | string |
The parent host device used when allocating a VF into an instance.
Key: | volatile.<name>.last_state.vf.spoofcheck |
Type: | string |
The original spoof check setting used when moving a VF into an instance.
Key: | volatile.<name>.last_state.vf.vlan |
Type: | string |
The original VLAN used when moving a VF into an instance.
Key: | volatile.<name>.mig.uuid |
Type: | string |
The NVIDIA MIG instance UUID.
Key: | volatile.<name>.name |
Type: | string |
The network interface name inside of the instance when no name property is set on the device itself.
Key: | volatile.<name>.vgpu.uuid |
Type: | string |
The NVIDIA virtual GPU instance UUID.
Key: | volatile.apply_nvram |
Type: | bool |
Key: | volatile.apply_template |
Type: | string |
The template with the given name is triggered upon next startup.
Key: | volatile.base_image |
Type: | string |
The hash of the image that the instance was created from (empty if the instance was not created from an image).
Key: | volatile.cloud_init.instance-id |
Type: | string |
Key: | volatile.cluster.group |
Type: | string |
The cluster group(s) that the instance was restricted to at creation time. This is used during re-scheduling events like an evacuation to keep the instance within the requested set.
Key: | volatile.container.oci |
Type: | bool |
Default: | |
Key: | volatile.cpu.nodes |
Type: | string |
The NUMA node that was selected for the instance.
Key: | volatile.evacuate.origin |
Type: | string |
The cluster member that the instance lived on before evacuation.
Key: | volatile.idmap.base |
Type: | integer |
Key: | volatile.idmap.current |
Type: | string |
Key: | volatile.idmap.next |
Type: | string |
Key: | volatile.last_state.idmap |
Type: | string |
Key: | volatile.last_state.power |
Type: | string |
Key: | volatile.last_state.ready |
Type: | string |
Key: | volatile.uuid |
Type: | string |
The instance UUID is globally unique across all servers and projects.
Key: | volatile.uuid.generation |
Type: | string |
The instance generation UUID changes whenever the instance’s place in time moves backwards. It is globally unique across all servers and projects.
Note
Volatile keys cannot be set by the user.