Miona Aleksic
on 8 May 2023
Spring news from the LXD team
In addition to having an LTS release every two years (following the Ubuntu release cadence), LXD also has monthly feature releases. While LTS releases are recommended for production environments, monthly releases are the ones to use in order to get access to the latest features our team is continuously working on. Monthly feature releases are available in the default (latest/stable) snap channel, while the LTS releases are available on the 5.0/stable channel (--channel=5.0/stable). More information on how to access different snap channels and how to manage the LXD snap is available in the LXD forum.
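As a quick reference, switching the LXD snap between the LTS track and the monthly feature track looks roughly like this (adjust the channel to the track you need):

```
# Install LXD from the LTS track (recommended for production)
sudo snap install lxd --channel=5.0/stable

# Or switch an existing install to the monthly feature releases
sudo snap refresh lxd --channel=latest/stable

# Check which channel is currently being tracked
snap info lxd | grep tracking
```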
In this blog, we will go through some of the most significant features we have included in our monthly releases so far this year. Each release announcement is linked below for more detailed information.
LXD 5.10 – Instance and server documentation improvements; Network charts in Grafana
As the first release following the holidays, LXD 5.10 was light on features in comparison to regular monthly releases.
- It included restructuring the Instance and server documentation sections, breaking them down into subsections that are easier to navigate and link to.
- The Grafana dashboard was expanded to cover network usage with four new charts covering top transmit traffic, top receive traffic, top transmit packets and top receive packets.
- A new configuration key was added that allows increasing or decreasing the network transmit queue length on NIC devices (see the example below this list).
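For illustration, setting the transmit queue length on an instance's NIC could look like the following sketch. It assumes the 5.10 key is named queue.tx.length and that the instance has an instance-local NIC device called eth0.

```
# Raise the transmit queue length on the instance's eth0 NIC
# (key name assumed to be queue.tx.length, added in LXD 5.10)
lxc config device set c1 eth0 queue.tx.length=10000

# Verify the device configuration
lxc config device show c1
```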
Read the full release announcement for more details, or watch the release live stream.
LXD 5.11 – Instance placement scriptlet and block storage mode on ZFS pools
LXD 5.11 includes several feature highlights, as well as performance improvements and bug fixes.
- An instance placement scriptlet was added, providing a better alternative to LXD’s default placement algorithm. Instead of the default behaviour of placing a new instance on whichever cluster member hosts the fewest instances, this feature allows users to make a more deliberate choice. Users can now provide a Starlark scriptlet that decides on a target cluster member based on information about the newly requested instance as well as a list of candidate members. Importantly, while scriptlets are able to access certain information about the instance and the cluster, they cannot access any local data, reach the network or perform complex, time-consuming actions. Read more about it in the specification, and see the sketch after this list.
- We included support for ZFS volumes (zvols), in addition to the ZFS datasets we’ve supported for a long time. This is something that was requested by the community and is finally available to users. It results in an experience that’s very similar to LVM or Ceph, but on the very capable ZFS backend. It can also be used to mix and match, having specific custom volumes use zvols while the rest of the data uses datasets. An example follows this list.
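As a rough illustration of the placement scriptlet, the sketch below loads a Starlark script into the server configuration. It assumes the server key is instances.placement.scriptlet and that the scriptlet exposes an instance_placement(request, candidate_members) function that can call set_target(); the field names used here (name, server_name) and the "first candidate" choice are illustrative assumptions, so check the specification for the exact API.

```
# placement.star: always target the first candidate member (illustrative only)
cat > placement.star << 'EOF'
def instance_placement(request, candidate_members):
    # Log what we were asked to place, then pick the first candidate.
    # Field names (name, server_name) are assumptions; see the specification.
    log_info("Placing instance: ", request.name)
    set_target(candidate_members[0].server_name)
    return
EOF

# Load the scriptlet into the cluster configuration
lxc config set instances.placement.scriptlet="$(cat placement.star)"
```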
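Block mode on ZFS can be enabled per volume or as a pool-wide default. The sketch below assumes the configuration key introduced in 5.11 is zfs.block_mode (with volume.zfs.block_mode as the pool-level default); the pool and volume names are placeholders.

```
# Create a ZFS pool where new volumes default to block mode (zvols)
lxc storage create demo zfs volume.zfs.block_mode=true

# Or enable block mode only for a specific custom volume
lxc storage volume create demo blockvol zfs.block_mode=true
```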
For the rest of the features and a complete changelog, please check the 5.11 release announcement, or this YouTube video for demos of the features.
LXD 5.12 – Fixes related to storage and instance migration
Rather than big features, the 5.12 release contains a lot of smaller fixes, especially around storage and instance migration.
- It’s now possible to instruct LXD to wipe the source device of a storage pool prior to creation (see the example after this list). While needed for specific use cases, this should be used with extreme care, as setting the wrong source value will cause the disk in question to have its header wiped clean by LXD.
- LXD now also implements VM generation IDs. This is purely an additional security feature that the guest OS may or may not use.
- A new disk configuration option has been added to control the caching behaviour of disks attached to virtual machines (example below).
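Wiping the source device during pool creation could look like this. The key name source.wipe is an assumption based on the 5.12 release notes, and /dev/sdb is a placeholder; double-check the device path, since the header of whatever disk you point at will be destroyed.

```
# Create a ZFS pool on /dev/sdb, wiping any existing header first
# WARNING: destructive, make sure the device path is correct
lxc storage create fastpool zfs source=/dev/sdb source.wipe=true
```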
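Controlling the VM disk cache mode could look like the following sketch. It assumes the new disk option is called io.cache and that the VM’s root disk device is named root; values typically include none, writeback and unsafe.

```
# Switch the VM's root disk to writeback caching
# (use 'lxc config device override' first if the disk comes from a profile)
lxc config device set vm1 root io.cache=writeback

# Confirm the setting
lxc config device show vm1
```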
You can access the complete change log in the release announcement, or watch the video introducing the changes.
LXD 5.13 – Live VM migration, AMD SEV support for VMs, OpenID Connect authentication
LXD 5.13 is quite a jam-packed release, including many useful networking and VM improvements.
- This release enables a much-improved VM live migration process, eliminating any perceptible downtime (see the example after this list). Previously, LXD relied on stateful stop: the ability to write all the memory and CPU state to disk and fully stop the virtual machine, while retaining the ability to start it back up exactly where it left off. The improved functionality, on the other hand, allows the source and target servers to communicate right from the start of the migration. This makes it possible to perform any disk state transfer in the background while the VM is still running, then transfer any remaining disk changes as well as the memory through multiple iterations of the migration logic, and finally cut over to the target system.
- LXD now supports AMD SEV for memory encryption of virtual machines. On compatible systems (AMD EPYC with firmware and kernel support enabled), setting security.sev to true will have the VM’s memory encrypted with a per-VM key handled by the firmware. Systems supporting AMD SEV-ES can then turn on security.sev.es to also have the CPU state encrypted for extra security (example below).
- As a first step towards a more industry-standard solution to authentication and authorisation in LXD, OpenID Connect can now be used for authentication (see the sketch below). LXD uses the Device Code flow: our CLI tool triggers the browser-based authentication flow, then retrieves and stores the access and refresh tokens and provides those to LXD on all interactions. Only authentication is supported at this stage. Any user that’s approved by the OIDC Identity Provider configured in LXD will get full access to LXD, comparable to that of being in the lxd group.
- This release adds VDPA for network acceleration on OVN. In addition to SR-IOV-based accelerated NICs on OVN networks, users can now use VDPA acceleration as well (example below). With VDPA, the guest doesn’t get to know what the physical NIC is. Instead, the guest sees a perfectly normal virtio-net device, the same as with non-accelerated networking. Behind the scenes, that virtio-net device actually has its RX/TX queues mapped to a VF, which is then connected into Open vSwitch and OVN the same way as would be done for SR-IOV. No drivers are needed in the guest, and the NIC can theoretically be remapped to a standard non-accelerated virtio-net device prior to migration, allowing for live migration.
- Several other networking improvements have been made, including layer 3 support on OVN, nested NIC support on OVN, as well as per-user bridges in multi-user setups.
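A rough sketch of live-migrating a VM between cluster members follows. It assumes a VM named vm1, a cluster member named server2, and that stateful migration is enabled via migration.stateful; treat it as an outline rather than the exact procedure.

```
# Allow the VM's state to be transferred (one-time setting)
lxc config set vm1 migration.stateful=true

# Live-migrate the running VM to another cluster member
lxc move vm1 --target server2
```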
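Enabling SEV on a VM is a matter of setting the keys mentioned above, for example (vm1 is a placeholder name):

```
# Encrypt the VM's memory with a per-VM key (requires AMD EPYC support)
lxc config set vm1 security.sev=true

# On SEV-ES capable systems, also encrypt the CPU state
lxc config set vm1 security.sev.es=true
```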
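Configuring OIDC could look roughly like the sketch below. The server keys oidc.issuer and oidc.client.id and the --auth-type flag reflect my understanding of the 5.13 configuration; the issuer URL, client ID and addresses are placeholders.

```
# Point LXD at the OIDC identity provider (placeholder values)
lxc config set oidc.issuer=https://auth.example.com/
lxc config set oidc.client.id=my-lxd-client

# From a client machine, add the remote using OIDC authentication;
# this opens a browser to complete the Device Code flow
lxc remote add my-lxd https://lxd.example.com:8443 --auth-type oidc
```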
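Attaching a VDPA-accelerated OVN NIC could look like the sketch below. It assumes an existing OVN network called ovn1, a VDPA-capable and correctly configured host NIC, and that the NIC device’s acceleration option accepts a vdpa value in 5.13; all names are placeholders.

```
# Add an OVN NIC to the VM with VDPA acceleration
# (assumes the host side has been prepared for OVN hardware offload)
lxc config device add vm1 eth0 nic network=ovn1 acceleration=vdpa
```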
For a detailed explanation of each of these features please refer to the announcement, or watch this video to see them in action.