Installation

Choose your release

LXD upstream maintains two kinds of releases in parallel:

  • LTS releases (LXD 4.0.x, LXD 3.0.x or LXD 2.0.x)
  • Feature releases (LXD 4.x)

LTS releases are recommended for production environments as they will benefit from regular bugfix and security updates but will not see new features added or any kind of behavioral change.

To get all the latest features and monthly updates to LXD, use the feature release branch instead.

Getting the packages

Linux

Alpine Linux

To install the feature branch of LXD, run:

apk add lxd

Arch Linux

To install the feature branch of LXD, run:

pacman -S lxd

Alternatively, the snap package can also be used on Arch Linux (see below).

Fedora

Instructions on how to use the COPR repository for LXD can be found here.

Alternatively, the snap package can also be used on Fedora (see below).

Gentoo

To install the feature branch of LXD, run:

emerge --ask lxd

Ubuntu

Ubuntu (all releases)

The recommended way to install LXD is with the snap package.

For the latest stable release, use:

snap install lxd

For the LXD 4.0 stable release, use:

snap install lxd --channel=4.0/stable

For the LXD 3.0 stable release, use:

snap install lxd --channel=3.0/stable

For the LXD 2.0 stable release, use:

snap install lxd --channel=2.0/stable

If you previously had the LXD deb package installed, you can migrate all your existing data over with:

lxd.migrate

Ubuntu 14.04 LTS (LXD 2.0 deb)

To install the LTS branch of LXD, run:

apt install -t trusty-backports lxd lxd-client

Ubuntu 16.04 LTS (LXD 3.0 deb)

To install the LTS branch of LXD, run:

apt install lxd lxd-client

To install the feature branch of LXD, run:

apt install -t xenial-backports lxd lxd-client

Snap package (Arch Linux, Debian, Fedora, OpenSUSE and Ubuntu)

LXD upstream publishes and tests a snap package which works for a number of Linux distributions.

The list of Linux distributions we currently test our snap for can be found here.

For those distributions, you should first install snapd using those instructions.

After that, you can install LXD with:

snap install lxd

Alternatively, pass:

  • --channel=4.0/stable for the LXD 4.0 LTS release,
  • --channel=3.0/stable for the LXD 3.0 LTS release or
  • --channel=2.0/stable for the LXD 2.0 LTS release.
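
If LXD is already installed, you can switch an existing installation to another channel with snap refresh, for example:

snap refresh lxd --channel=4.0/stable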

macOS builds

Note:

The builds for macOS only include the client, not the server.

LXD upstream publishes builds of the LXD client for macOS through Homebrew.

To install the feature branch of LXD, run:

brew install lxc

Windows builds

Note:

The builds for Windows only include the client, not the server.

Native builds of the LXD client for Windows can be found here.

Installing from source

Instructions on building and installing LXD from source can be found here.

Initial configuration

Note:

The term instances covers both containers and virtual machines.

Before you can create an instance, you need to configure LXD.

Run the following as root:

lxd init
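
For a quick non-interactive setup that accepts all the defaults, lxd init also provides an --auto flag:

lxd init --auto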

Overview of the configuration options:

default=no means the feature is disabled by default.

Clustering
  Description: A cluster combines several LXD servers. They share the same distributed database and can be managed uniformly using the LXD client (lxc) or the REST API.
  Options: default=no; if set to yes, you can either connect to an existing cluster or create a new one.
  More information: LXD documentation: clustering

MAAS server
  Description: "MAAS is an open-source tool that lets you build a data centre from bare-metal servers."
  Options: default=no; if set to yes, you can connect to an existing MAAS server and specify its name, URL and API key.
  More information: maas.io; MAAS - install with LXD

Network bridge
  Description: Provides network access for the instances. You can either use an existing bridge (or interface) or let LXD create a new bridge (the recommended option). You can also create additional bridges and assign them to instances later.
  More information: LXD documentation: networks, network interface

Storage pools
  Description: Instances and other data are stored in storage pools. For testing purposes you can create a loop-backed storage pool, but for production use an empty partition (or full disk) is recommended, because loop-backed pools are slower and their size cannot be reduced. The recommended backends are ZFS and btrfs. You can also create additional storage pools later.
  More information: LXD documentation: storage, comparison of methods, backend comparison chart

Network access
  Description: Allows access to the server over the network.
  Options: default=no; if set to yes, you can connect to the server over the network and either set a password or accept the client certificate manually.

Automatic image update
  Description: Images can be downloaded from image servers; in that case they can be updated automatically.
  Options: default=yes; if set to yes, LXD will update the downloaded images regularly.
  More information: LXD documentation: image handling

YAML "lxd init preseed"
  Description: Displays a summary of your chosen configuration options as YAML in the terminal.
  Options: default=no

Access control

Access control for LXD is based on group membership. The root user as well as members of the "lxd" group can interact with the local daemon.

If the "lxd" group is missing on your system, create it, then restart the LXD daemon. You can then add trusted users to it. Anyone added to this group will have full control over LXD.

Because group membership is normally only applied at login, you may need to either re-open your user session or use the "newgrp lxd" command in the shell you're going to use to talk to LXD.
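
For example (the group usually already exists if LXD was installed from a package):

getent group lxd || sudo groupadd lxd   # create the group if it is missing
sudo usermod -aG lxd "$USER"            # add your user to the group
newgrp lxd                              # apply the membership in this shell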

Warning:

Anyone with access to the LXD socket can fully control LXD, including the ability to attach host devices and filesystems. Access should therefore only be given to users who would be trusted with root access to the host. You can learn more about LXD security here.

Note about Virtual machines

Since version 4.0, LXD also natively supports virtual machines, and thanks to a built-in agent they can be used almost like containers.

LXD uses QEMU to provide the VM functionality.

See below for how to start a virtual machine.

You can find more information about virtual machines in our forum [1].

Note:

For now, virtual machines support fewer features than containers.
See Advanced Guide - Instance configuration for details.

LXD client

The LXD client lxc is a command-line tool for managing your LXD servers.

Overview

The following command will give you an overview of all available commands and options:

lxc
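
Each subcommand has its own help text as well, for example:

lxc launch --help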

Launch an instance

You can launch an instance with the lxc launch command:

Launch a container:

lxc launch imageserver:imagename instancename

Launch a virtual machine:

lxc launch imageserver:imagename instancename --vm

Replace:

  • imageserver with the name of a built-in or added image server (e.g. ubuntu or images),
  • imagename with the name of an image (e.g. 20.04 or debian/11); see section "Images" for details, and
  • instancename with a name of your choice (e.g. ubuntuone); if left empty, LXD will pick a random name.

Example for Ubuntu

lxc launch ubuntu:20.04 ubuntuone

This will create a container based on the Ubuntu 20.04 LTS (Focal Fossa) image provided by the ubuntu: remote, with the instance name ubuntuone.
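
The same image can also be launched as a virtual machine by appending --vm (ubuntuvm is just an example name):

lxc launch ubuntu:20.04 ubuntuvm --vm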

Configuration of instances

See Advanced Guide - Instance Configuration.

Images

Instances are based on images, which contain a basic operating system (for example a Linux distribution) and some other LXD-related information.

In the following we will use the built-in remote image servers (see below).

For more options see Advanced Guide - Advanced options for Images.

Use remote image servers

The easiest way is to use a built-in remote image server.

You can get a list of built-in image servers with:

lxc remote list

LXD comes with three default image servers:

  1. ubuntu: (for stable Ubuntu images)
  2. ubuntu-daily: (for daily Ubuntu images)
  3. images: (for a bunch of other distros)

List images on server

To list the images available on the images: server, type:

lxc image list images:

Most columns in the list should be self-explanatory.

Search for images

You can search for images by adding filter terms (e.g. the name of a distribution).

Show all Debian images:

lxc image list images: debian

Show all 64-bit Debian images:

lxc image list images: debian amd64

Images for Virtual Machines

It is recommended to use the cloud variants of images (recognizable by the cloud tag in their ALIAS).
They include cloud-init and the LXD agent, grow their disk size automatically, and are tested daily.
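
For example, to list the cloud variants of Debian images (filter terms are combined, as in the amd64 example above):

lxc image list images: debian cloud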

Instance management

List all Instances:

lxc list

Start/Stop

Start an instance:

lxc start instancename

Stop an instance:

lxc stop instancename
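
If an instance hangs and does not stop cleanly, it can be forced:

lxc stop instancename --force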

Shell/Terminal inside Container

Get a shell inside a container:

lxc exec instancename -- /bin/bash

By default you are logged in as root:

root@containername:~#

To log in as a user instead, run:

Note: In many containers you need to create a user first.

lxc exec instancename -- su --login username
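
On Debian- or Ubuntu-based images, you can create such a user first with adduser (username is a placeholder):

lxc exec instancename -- adduser username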

Exit the container shell, with:

root@containername:~# exit

Run Command from Host terminal

Run a single command from the host's terminal:

lxc exec containername -- apt-get update

Shell/Terminal inside Virtual Machine

You can see your VM boot with:

lxc console instancename

(detach with Ctrl+a then q)

Once booted, VMs that include the LXD agent can be accessed with:

lxc exec instancename bash

Exit the VM shell, with:

exit

Copy files and folders between container and host

Copy from an instance to host

Pull a file with:

lxc file pull instancename/path-in-container path-on-host

Pull a folder with:

lxc file pull -r instancename/path-in-container path-on-host

For example:

lxc file pull instancename/etc/hosts .

Copy from host to instance

Push a file with:

lxc file push path-on-host instancename/path-in-container

Push a folder with:

lxc file push -r path-on-host instancename/path-in-container
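
For example, to copy a local script into the instance's /root directory (myscript.sh is a placeholder):

lxc file push myscript.sh instancename/root/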

Remove instance

Warning:

This will delete the instance, including all of its snapshots.
Deletion is final in most cases and restoring the instance is unlikely to be possible!
See Tips & Tricks in Advanced Guide on how to avoid accidental deletion.

Use:

lxc delete instancename
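
As an example of such a safeguard, the security.protection.delete configuration key makes LXD refuse to delete an instance until the key is unset again:

lxc config set instancename security.protection.delete true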

Further Information & Links

You can find more information on the following pages:


  1. Running virtual machines with LXD, including a short how-to for a Microsoft Windows VM.