- Introduction
- Your first containers
- Inspect the containers
- Stop and delete containers
- Instance configuration
- Interact with a container
- Access files from the container
- Snapshots
- Conclusion
You are now root inside a LXD container with a nested LXD installed inside it.
Initial startup can take a few seconds due to having to generate SSL keys on a rather busy system. Further commands should then be near instantaneous.
To get started, follow this step-by-step tutorial that will guide you through LXD's main features. Or just poke around and discover LXD through its manpage and --help option!
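For example, the top-level help and the per-command help are a good starting point (the exact output depends on the LXD version running in the demo environment):
lxc --help
lxc launch --help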
Tip
Click on any of the commands in the tutorial to copy it into the terminal.
LXD is image based and can load images from different image servers. In this tutorial, we will use the images: server.
You can list all images that are available on this server with:
lxc image list images:
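The full list is long, so you will usually want to narrow it down by passing a filter term or choosing a different output format. The filter "ubuntu" below is just an example, and the available formats may vary with your LXD version:
lxc image list images: ubuntu
lxc image list images: ubuntu --format csv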
Start by launching a few containers.
- Launch a container called "first" using the Ubuntu 20.04 image:
lxc launch images:ubuntu/20.04 first
Note that launching this container takes a few seconds, because the image must be downloaded and unpacked first.
- Launch a container called "second" using the same image:
lxc launch images:ubuntu/20.04 second
Launching this container is quicker than launching the first, because the image is already available.
- Copy the first container into a container called "third":
lxc copy first third
- Launch a container called "alpine" using the Alpine Edge image:
lxc launch images:alpine/edge alpine
Check the list of containers that you launched:
lxc list
You will see that all but the third container are running. This is because you created the third container by copying the first, but you didn't start it.
You can start the third container with:
lxc start third
You can query more information about each container with:
lxc info first
lxc info second
lxc info third
lxc info alpine
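If you want to process this information in a script, lxc list can also produce machine-readable output; the formats below are common ones, though availability may depend on your LXD version:
lxc list --format csv
lxc list --format json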
We don't need all of these containers for the remainder of the tutorial, so let's clean some of them up.
- Stop the second container:
lxc stop second
- Delete the second container:
lxc delete second
- Delete the third container:
lxc delete third
Since this container is running, you get an error message that you must stop it first. Alternatively, you can force-delete it:
lxc delete third --force
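If you ever need to remove several leftover containers in one go, a small shell loop on the host works too (a sketch; replace the names with whatever you actually created):
# force-delete each listed container, stopping it first if it is still running
for c in second third; do
  lxc delete "$c" --force
done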
There are several limits and configuration options that you can set for your instances. See Instance configuration for an overview.
Let's create another container with some resource limits.
- Launch a container and limit it to one vCPU and 192 MiB of RAM:
lxc launch images:ubuntu/20.04 limited -c limits.cpu=1 -c limits.memory=192MiB
- Check the current configuration and compare it to the configuration of the first (unlimited) container:
lxc config show limited
lxc config show first
- Check the amount of free and used memory on the parent system and on the two containers:
free -m
lxc exec first -- free -m
lxc exec limited -- free -m
Note that the total amount of memory is identical for the parent system and the first container, because by default, the container inherits the resources from its parent environment. The limited container, on the other hand, has only 192 MiB available.
- Check the number of CPUs available on the parent system and on the two containers:
nproc
lxc exec first -- nproc
lxc exec limited -- nproc
Again, note that the number is identical for the parent system and the first container, but reduced for the limited container.
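The limit values are fairly flexible. For example, limits.cpu can pin the container to specific host CPUs, and limits.memory can be given as a percentage of the host's memory. The container name "pinned" is just an example here, and the exact syntax accepted can differ between LXD versions:
lxc launch images:ubuntu/20.04 pinned -c limits.cpu=0-1 -c limits.memory=25%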
You can also update the configuration while your container is running.
- Configure a memory limit for your container:
lxc config set limited limits.memory=128MiB
- Check that the configuration has been applied:
lxc config show limited
- Check the amount of memory that is available to the container:
lxc exec limited -- free -m
Note that the number has changed.
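If you later want to lift a limit again, you can unset the key and read back the value to confirm, for example:
lxc config unset limited limits.memory
lxc config get limited limits.memory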
Let's interact with your containers.
- Launch an interactive shell in your container:
lxc exec first -- bash
- Enter some commands, for example, display information about the operating system:
cat /etc/*release
- Exit the interactive shell:
exit
- Repeat the steps for your alpine container:
lxc exec alpine -- ash
cat /etc/*release
exit
- Instead of logging on to the container and running commands there, you can run commands directly from the host. For example, you can install a command line tool on the container and run it:
lxc exec first -- apt-get update
lxc exec first -- apt-get install sl -y
lxc exec first -- /usr/games/sl
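Because lxc exec runs from the host, you can also combine it with ordinary host-side redirection and pipes. A small sketch (the file name first-uname.txt is just an example):
lxc exec first -- uname -a > first-uname.txt
cat first-uname.txt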
You can access the files from your container and interact with them.
- Pull a file from the container:
lxc file pull first/etc/hosts .
- Add an entry to the file:
echo "1.2.3.4 my-example" >> hosts
- Push the file back to the container:
lxc file push hosts first/etc/hosts
- Use the same mechanism to access log files:
lxc file pull first/var/log/syslog - | less
Enter q to exit the pager.
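For quick edits there is also a shortcut that combines pull and push: lxc file edit opens the file in an editor on the host and writes it back when you save (assuming a suitable editor is available in the demo environment):
lxc file edit first/etc/hosts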
LXD supports creating and restoring container snapshots.
- Create a snapshot called "clean":
lxc snapshot first clean
- Confirm that the snapshot has been created:
lxc list first
lxc info first
lxc list shows the number of snapshots. lxc info displays information about each snapshot.
- Break the container:
lxc exec first -- rm -Rf /etc /usr
- Confirm the breakage:
lxc exec first -- bash
Note that you do not get a shell, because you deleted the bash command.
- Restore the container to the snapshotted state:
lxc restore first clean
- Confirm that everything is back to normal:
lxc exec first -- bash
exit
- Delete the snapshot:
lxc delete first/clean
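Snapshots can also serve as the source of a new container. For example, before deleting the snapshot you could have copied it into a fresh container; the name "first-clean" is just an example:
lxc copy first/clean first-clean
lxc start first-clean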
We hope this gave you a good introduction to LXD, its capabilities and how easy it is to use.
You're welcome to use the demo service for as long as you want, to try LXD and play with the latest features.
Enjoy!