Using LXD and KVM on the Same Host

LXD and KVM are hypervisors that can work simultaneously on the same host thanks to the different nature of the virtual instances they create. KVM creates virtual machines using full virtualization, while LXD creates a virtual run-time environment on top of the OS kernel – the former requires CPU support, the latter only a suitable kernel.

OpenNebula 5.8 has virtualization drivers for both LXD and KVM, and although both can peacefully coexist, the current architecture treats every virtualization node as a single-hypervisor host. When a VM is queued on the scheduler, it is deployed on a suitable host, and the driver used to deploy it is determined by the hypervisor type of that host. There is an open issue describing this situation, which will require changes to several logical components of OpenNebula. Fortunately, it is possible to overcome it with a very simple work-around.

When a host is added to an OpenNebula frontend, you are required to enter a hostname associated with that host; it can be either its IP address or a name that resolves to that IP address. With that in mind, the frontend may refer to the same host by several names. You can add those names in the DNS server the frontend uses, or in its /etc/hosts file.

In this post we will create an LXD single-server setup using miniONE and then add that same host as a KVM node.


Deploy the LXD node using miniONE. Note that there is an extra command line argument for the LXD flavor. Make sure you use an Ubuntu host, since the LXD driver is only supported on Ubuntu distros.
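Assuming the current miniONE release keeps the same download URL and flag name, the deployment boils down to:

```shell
# Download miniONE and run it with the LXD flavor on an Ubuntu host;
# --lxd is the extra argument that selects LXD instead of the default KVM
wget 'https://github.com/OpenNebula/minione/releases/latest/download/minione'
sudo bash minione --lxd
```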

Check the hosts.

root@LXDnKVM:~# onehost list
0 localhost default 0 0 / 200 (0%) 0K / 1.9G (0%) on
root@LXDnKVM:~# onehost show 0 | grep MAD
IM_MAD : lxd
VM_MAD : lxd

Add the kvm name to the host's entry in the /etc/hosts file:

LXDnKVM localhost kvm  # one-contextd

Now the oneadmin user is already able to SSH into the host, but you need to make sure the kvm name is a known host.

oneadmin@LXDnKVM:~$ ssh kvm
The authenticity of host 'kvm (' can't be established.
ECDSA key fingerprint is SHA256:dwPyCUgSN38eh9kL2cn/l2PQ67aUVOjt37JVceLCbZ0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kvm' (ECDSA) to the list of known hosts.

Now add the “new” host.

oneadmin@LXDnKVM:~$ onehost create kvm -v kvm -i kvm
oneadmin@LXDnKVM:~$ onehost list
1 kvm default 0 0 / 200 (0%) 0K / 1.9G (0%) on
0 localhost default 0 0 / 200 (0%) 0K / 1.9G (0%) on

Let’s create a VM and a container.

oneadmin@LXDnKVM:~$ onetemplate instantiate 0
VM ID: 0
oneadmin@LXDnKVM:~$ onevm list
0 oneadmin oneadmin CentOS 7 - KVM- runn 0 0K kvm 0d 00h00
oneadmin@LXDnKVM:~$ onetemplate instantiate 0
VM ID: 1
oneadmin@LXDnKVM:~$ onevm list
1 oneadmin oneadmin CentOS 7 - KVM- runn 0 0K localhost 0d 00h00
0 oneadmin oneadmin CentOS 7 - KVM- runn 0 0K kvm 0d 00h00
oneadmin@LXDnKVM:~$ virsh list
 Id    Name                           State
1     one-0                          running
oneadmin@LXDnKVM:~$ lxc list
| NAME  |  STATE  |  IPV4  | IPV6 |    TYPE    | SNAPSHOTS |
| one-1 | RUNNING | (eth0) |      | PERSISTENT | 0         |

Note that we instantiated the same template twice: the scheduler deployed the first instance as a KVM virtual machine on the kvm host and the second as a container on localhost, since after the first deployment the kvm host already had resources allocated while the LXD host was still empty.


  • You can tweak the capacity section of the hosts to plan resource allocation and implement the desired resource quota for each hypervisor.
  • You can create an LXD cluster out of your existing KVM cluster.
  • Since the LXD driver is able to deploy KVM images and KVM VM templates, make sure you specify in the templates where you want the VM/container to run.
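For that last point, a scheduler requirement in the VM template is enough to pin an instance to one hypervisor type. A minimal template fragment, relying on the HYPERVISOR attribute reported by the hosts' monitoring drivers:

```
# Deploy this template on KVM hosts only; use "lxd" instead to force containers
SCHED_REQUIREMENTS = "HYPERVISOR=\"kvm\""
```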

Hope this is helpful! Send your feedback.

August 2019 – OpenNebula Newsletter

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.


August, in many circles, is the month of vacationing and “energy recharging” before returning to the office. And that has likely been the case for many folks in the User Community, just as it has been for the OpenNebula Systems team. But that doesn’t mean that we have taken our foot off the gas on development. As we continue to work on the upcoming release of v.5.10, our latest focus has been on a few specific features.

One thing on which we have been actively working is the packaging of gem dependencies. The packaging for the next major version will be improved to cover all Ruby gem dependencies as dedicated operating system packages (rpm/deb). This will allow you to run OpenNebula from proposed and tested Ruby gem versions, isolated from the dependencies installed system-wide. And it will vastly speed up the installation and upgrade process!

We are also working on unified secrets handling. All secrets are encrypted and decrypted in ONEd. Since drivers get secrets decrypted, this will simplify the code within those drivers. We also have some active focus dedicated to custom scripts for mixed modes.


The activity and energy within the Community has continued to keep momentum throughout the month of August. For one, we just wrapped up an exciting week at the VMworld 2019 in San Francisco, CA. We, at OpenNebula Systems, were active exhibitors within the “Solutions Exchange”, where companies were able to showcase their offerings in the virtualization and cloud technology space.

We had the opportunity to showcase OpenNebula and vOneCloud to many new sets of eyes, while also being able to meet up with many current users, who stopped by to inquire about upcoming developments and to connect directly with members from the OpenNebula team. It was a busy week of demos and information sharing – and one very well spent. Don’t forget, we will also be in attendance at the VMworld 2019 event in Barcelona, Spain on November 4-7, 2019.

This month we were proud to announce that NTS is joining as a Sponsor to our upcoming OpenNebulaConf 2019. We are excited to have NTS not only help sponsor, but also to be a part of the exciting event to showcase OpenNebula from all angles.

We also saw several interesting contributions from the User Community in the Developers’ Forum. One exciting development came in the form of a “Helm chart for OpenNebula Control-plane”, from community member Nico. It is a very interesting integration with Kubernetes, one for which Nico has engaged the community to help contribute. Hopefully, we will continue to see more details about this development, and the many more like it.


August has wrapped up, and the last few months of 2019 are rapidly approaching. As mentioned earlier, the OpenNebulaConf 2019 – our showcase event to highlight the exciting evolution OpenNebula is taking – will be held this year in Barcelona, Spain on October 21-22, 2019. This year’s event builds on previous editions by offering not only a huge line-up of presentations from active users and champions of OpenNebula, along with a Hands-on Tutorial covering some Administration topics, but also several Workshops and “hacking sessions” to dig into all topics of interest. Check out the agenda, and register now, as the “Early Bird” pricing is wrapping up soon!

September is lining up to be a busy month as well, as we have two OpenNebula TechDays scheduled. Remember, these TechDay events are great ways to connect with the User Community, share insights on use cases, and a perfect way to get FREE hands-on training. Check your schedules, and Register Now!

Let’s welcome September in! And stay connected!

Agenda of TechDay Frankfurt – 11SEP19

On September 11, 2019, we will be collaborating with Interactive Networks and EuroCloud Deutschland to hold an OpenNebula TechDay in Frankfurt, Germany.

OpenNebula TechDays provide a great opportunity to meet and share knowledge among cloud enthusiasts.

As usual we will have an OpenNebula hands-on tutorial in the morning and several talks in the afternoon by active members in the User Community, as well as by OpenNebula Systems developers.

Remember that TechDay events are FREE of charge. Check out the Agenda and Register Now!

We’ll see you in Frankfurt!

NTS – A Silver Sponsor of OpenNebulaConf 2019

The OpenNebulaConf 2019 is approaching quickly – scheduled for October 21-22, 2019 in Barcelona, Spain – and we are extremely excited to announce that NTS is a Silver Sponsor for the event!

Be sure to join us, along with NTS, in Barcelona for this great event.  We have a wonderful lineup of speakers, workshops, and hands-on tutorials for all audiences. The “Early Bird” registration is still open.

Check out the event page for more details. We look forward to seeing you in Barcelona!

NTS “In Words”

RELAX, WE CARE: We take on the digital responsibility for our clients. In our actions, we combine clear thinking and structured processes with our passion for IT.

NTS Netzwerk Telekom Service AG headquartered in Raaba-Grambach near Graz was founded in 1995 by executive board members Alexander Albler and Hermann Koller. Currently more than 338 staff are employed at the locations Graz, Klagenfurt, Vienna, Linz, Salzburg, Innsbruck, Dornbirn, Friedrichshafen, Rosenheim, Leipzig and Bolzano. In 2018 our revenue exceeded EUR 120 million.

About NTS

No matter where you are on your way into the Cloud, NTS as a professional consultant will be able to make the right choice for your Cloud strategies! We gladly support you with our expertise when implementing Cloud strategies and we offer comprehensive advice along the entire value chain. We develop individual Cloud strategies and by using “Cloud methodology” synergies are created that make our customers more powerful; thanks to a versatile IT infrastructure on-premises in the private Cloud or in the public Cloud.

About NTS Captain (Cloud Automation Platform)

In conventional IT departments, workload and complexity are constantly on the increase. However, the respective IT resources are not growing at the same pace. As a result, problems such as inefficiency, long waiting times, missing standards and decentralized management very often occur. Our new product NTS Captain enables IT departments to present themselves as an internal service provider and thus to deal with queries in a fast and efficient way.

With the help of NTS Captain, NTS customers are changing their IT organizations into agile internal infrastructure providers which deliver answers to new challenges such as DevOps. In this way, customers have a much tighter grip on their IT. NTS Captain is based on OpenNebula and can be integrated into an existing VMware environment as a self-service platform without any issues.

About NTS Managed Private Cloud  (NTS Captain / OpenNebula included)

We provide our customers with a standardized and highly available infrastructure as a service (IaaS). Each customer receives a dedicated and physically segregated environment that comes with a specially tailored, comprehensive carefree package. We take care of it – ranging from the implementation and the migration planning all the way to a fully managed backup. To guarantee a service availability of 99.99%, NTS experts monitor the system 24×7 and thus ensure smooth-running operation. Not to mention the simple billing model with a maximum of flexibility that comes with it.

  • No investment in hardware and no buildup of know-how required
  • 100% demand-based billing
  • Optional in-house hardware (no load on WAN connections)
  • Guaranteed service level agreements
  • Flexibility and scalability
  • Self-service with NTS CAPTAIN (OpenNebula)
  • Inclusive of back-up planning and management

July 2019 – OpenNebula Newsletter

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.


July has come and gone. Sure, it has been hot, but most of the sweat shed this month has come from the really exciting work that we have all been doing here and across the User Community.

Our steady focus on bringing simple, integrated cloud deployment and management to the Edge took another leap ahead in July with our AWS IoT Greengrass deployment. We published a detailed blog post and a video screencast of the entire exercise, outlining just how easy it is to automatically deploy an AWS IoT Greengrass environment, taking advantage of the latest DDC features, as well as utilizing the new AWS Greengrass appliance that we released in the OpenNebula Marketplace.

The 5.8.4 HotFix version was released earlier this month. While HotFix releases are available exclusively to OpenNebula Systems Support subscribers, it is helpful to know that issues and new features are continuously being worked on and introduced into the OpenNebula releases.

The OpenNebula Systems team has also been working on dedicated VMware integration capabilities. Right now, NSX-T support for Opaque Networks and NSX-V support for Logical Switches are in “beta” mode and will be made available very soon. We are actively working on other NSX-V features like Security Groups and Distributed Firewalls (DFW).

We also shared some important details about Using LXD on CentOS Hosts with Nested Virtualization, as well as some helpful documentation on how to best take advantage of miniONE.


A very constructive contribution from the Community came this month from Iguane Solutions, in the form of their Terraform provider, which was recently added to the OpenNebula Add-on Catalog. Iguane Solutions and OpenNebula Systems are working directly with Hashicorp to establish this provider as an “official provider” – to be included in the Hashicorp Official provider GitHub.

StorPool also published a detailed article “VM performance optimization for OpenNebula with KVM” in which they outline a collection of recommendations for improved performance and extending OpenNebula on KVM.


A lot of exciting events are on the brink of taking place in the world of OpenNebula.

From August 25-29, 2019, we will be in attendance at VMworld US in San Francisco, CA. OpenNebula will be at Booth #1669 of the Solutions Exchange, reviewing latest features of vOneCloud and OpenNebula, and providing demos of how to transform your VMware infrastructure into a multi-tenant cloud – drastically reducing complexity and cost. Here are details from an earlier post.

Our annual showcase event – OpenNebulaConf 2019 – is fast-approaching, and we published the exciting agenda with newly added “Hacking and Development Workshops“, in addition to Keynotes and Presentations from active members in the community, and the interactive Hands-on Tutorial. Don’t forget to register soon – before the “Early Bird Discount” ends!

And September will be bringing two OpenNebula TechDays, hosted by active users in the OpenNebula Community.

Remember – OpenNebula TechDays are open to everyone and the registration is FREE.

Stay connected!

Upcoming TechDay in Vienna – Hosted by NTS

On September 26, 2019, NTS will be hosting an OpenNebula TechDay in Vienna, Austria.

NTS has teamed up with NetApp, as well as with OpenNebula Systems, to pull together an impressive agenda of users and project collaborators to share their expertise and insight into top-notch OpenNebula usage. You will get to hear from and speak with users from NTS, NetApp, Haufe Lexware, TeleData, LINBIT, EURAC Research, and OpenNebula Systems. With this line-up, we are making an exception and foregoing the Hands-on Tutorial, providing the opportunity to get a full day’s worth of knowledge sharing and rich, active discussion.

Meet up with active users in the community, bring your questions and your curiosity. Don’t miss out on this exciting FREE OpenNebula TechDay!

Check out the Vienna TechDay Portal.

vOneCloud / OpenNebula at VMworld US 2019 in San Francisco

This coming August 25-29, 2019, OpenNebula Systems will be in San Francisco, CA at the VMworld US 2019 event. This is a “must-attend” event in the world of Virtualization and Cloud Computing, filled with industry experts. And we will be stationed in the midst of the “Solutions Exchange”, prepped and excited to showcase our newly released vOneCloud 3.4.1 – the open-source replacement for VMware vCloud. You’ll be able to see a live demo of how a VMware-based infrastructure can be turned into a cloud with a fully functional self-service portal in 5 minutes!

We’ll also be sharing some of our latest developments and insights into what’s in store for upcoming releases of vOneCloud 3.6 and OpenNebula 5.10. Here’s a quick peek:

OpenNebula 5.10 and vOneCloud 3.6 will include support for VMware NSX-V, comprised of main features like:

  • Logical switches
  • Distributed Firewall (DFW)
  • Security Groups (SG)

And as a first step toward the integration with VMware NSX-T, Opaque Networks will be supported, allowing:

  • Create / Delete / Import / Attach / Detach in NSX-T environments
  • Import / Attach / Detach in VMC on AWS environments

You’ll soon be able to work with logical switches to Create, Delete, Modify, Attach and Detach virtualwires through OpenNebula to its instances and templates. Also, microsegmentation will be supported thanks to working in conjunction with Distributed Firewall and Security Groups, enabling the next level of security even for servers on the same network.

Another improvement in the VMware integration will be support for VMRC through Sunstone using HTML5. This will avoid the need to connect directly from the client browser to the ESX running the specific VM to gain console access. Full support for all disk buses (SATA, IDE, SCSI) will be added, as well. And vCenter Driver refactoring, better error handling and performance improvements are all lined up.

If you are planning on attending, don’t forget to register. And make sure to stop by and catch up with us at Booth # 1669.

vOneCloud 3.4.1 Released!

We want to let you know that we have just announced the availability of vOneCloud version 3.4.1.

vOneCloud 3.4.1 is based on OpenNebula 5.8.4 and, as such, includes all the bug fixes and functionalities introduced in 5.8.4: OpenNebula 5.8.4 Release Notes.

vOneCloud 3.4.1 is a maintenance release with the following minor improvements:

  • Add a timepicker in relative scheduled actions.
  • Check vCenter cluster health in monitoring.
  • Added official support for the Sunstone banner.
  • New quotas for VMs allow configuring limits for “running” VMs.
  • Virtual Machines associated to a Virtual Router have all actions allowed except nic-attach/detach.
  • Implement retry on vCenter driver actions.
  • Allow FILES in vCenter context.
  • noVNC updated to v1.0.
  • Centralized credentials for vCenter resources.
  • Enhance pool calls in vCenter driver actions.
  • Read driver action on attach_disk using STDIN for vCenter drivers.
  • Manage IPs when a VM is imported from vCenter.
  • Show whether a vCenter cluster has DRS and/or HA activated.

Also 3.4.1 features the following bugfixes:

  • Fix a bug in vcenter_downloader failing to download vCenter images.
  • Fix an issue so that hourly scheduled actions execute just one time.
  • Fix missing wait_for_completion in some vCenter async tasks.
  • Fix an issue which broke newlines in templates.
  • Fix an issue where the datastores table was not shown when a new VM template is downloaded.
  • Fix misleading non-persistent message in the instantiate dialog.
  • Fix an issue when importing vCenter hosts while an OpenNebula cluster with the same name exists.
  • Fix the wild VM import process to not default to host 0.
  • Fix missing DEPLOY_ID when importing wild VMs.
  • Reduce the amount of database disk space generated by VM search indexes.
  • Fix a VM being unable to boot due to invalid cdrom config.
  • Fix updating a vCenter VM disk image in Sunstone not removing the OPENNEBULA_MANAGED attribute.
  • Fix snapshots on vCenter so unaffected disks are not removed.
  • Fix the list of virtual routers shown in virtual networks to follow user access permissions.
  • Fix an issue adding a persistent image via Sunstone.
  • Fix shutdown not checking VM status in vCenter.
  • Add IP6_LINK and IP6_GLOBAL attributes to the VM short body.
  • Fix lock VM highlight in Sunstone.
  • Fix container status inconsistency during boot.
  • Fix an issue that prevented admins from changing other permissions when ENABLE_OTHER_PERMISSIONS=NO.
  • Set the right drivers when importing a market app to a vCenter datastore.
  • Improve error messages on vCenter deploy.
  • Fix an error when trying to delete an image in vCenter using NFS.
  • Added basic support for NSX Opaque Networks.
  • Clean up onevcenter tool error messages.
  • Fix creating a VM group without roles in Sunstone.
  • Fix the disappearing rm_ar_button.

vOneCloud 3.4.1 has been certified with support for vSphere 6.0, 6.5 and 6.7.

Terraform Provider for OpenNebula in the Add-on Catalog

Iguane Solutions has made an exciting, new contribution to the OpenNebula Add-on Catalog with a Terraform Provider for OpenNebula.

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently, as it can manage existing and popular service providers as well as custom in-house solutions.

This OpenNebula provider will help people who need to deploy a platform composed of several VMs with their respective disks, connected via several networks. With a simple text file, the DevOps engineer describes their platform using the HCL syntax (HCL is a JSON-compatible configuration language developed by HashiCorp) and then executes Terraform to deploy it.

In a Terraform file you must declare:

  • How to connect to the target OpenNebula Endpoint in the provider configuration
  • A variable set you need to use
  • All resources you want to create and deploy 
  • Output information you are interested to get at the end of the deployment.
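A minimal sketch of such a file could look as follows; the resource and attribute names are illustrative and should be checked against the provider's documentation:

```hcl
# Provider configuration: how to reach the target OpenNebula endpoint
provider "opennebula" {
  endpoint = "http://frontend:2633/RPC2"
  username = "oneadmin"
  password = "opennebula"
}

# A VM resource instantiated from an existing template
resource "opennebula_virtual_machine" "web" {
  name        = "web-1"
  template_id = 0
}

# Output information to report at the end of the deployment
output "web_ip" {
  value = opennebula_virtual_machine.web.ip
}
```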

This project has been inspired by previous OpenNebula contributions from Runtastic and BlackBerry. And we, at Iguane Solutions, are looking to contribute and manage the Add-on as an official provider, to ensure that it offers the community a robust and effective set of functionalities. We invite members of the User Community to contribute to its evolution, so that we can keep it growing to meet the needs of the community and avoid creating multiple variations of the same tool.

Using LXD on CentOS Hosts with Nested Virtualization

With the release of OpenNebula 5.8, official support for LXD containers was added to the driver stack. It is now possible to create containers with a very flexible approach, using regular VM images and apps from the OpenNebula Marketplace and other marketplaces.

However, there is a strong limitation in the supported virtualization nodes: currently only LXD nodes running Ubuntu >= 18.04 are supported. That could be a problem for users who prefer/have Debian or CentOS. The good news is that this limitation can be overcome by using full virtualization with the KVM driver.

In this post we are going to create a single-node OpenNebula setup. That host will have CentOS 7 installed and will act as a regular KVM node. We will create an Ubuntu 18.04 VM in the same network as the frontend, and then we’ll add that VM as an LXD node.

This nested virtualization approach renders additional benefits because of the complete decoupling of the containers from the hypervisor layer, namely:

  • Increased Security
  • Live-migration of the container workload
  • The ability to use any infrastructure no matter the OS or its location (private or public).

Let’s get started.

Create the environment

  • Get yourself a CentOS 7 server
  • Deploy OpenNebula there; miniONE is the perfect tool to do so.

Prepare the LXD VM

Import Ubuntu 18.04 – KVM from the OpenNebula Marketplace 

[root@LXDoCentOS ~]# onemarketapp list | grep 18.04
  15 Ubuntu 18.04 - KVM        5.8.0-1.20  2.2G  rdy  img 02/26/19 OpenNebula Public       0
  1 Ubuntu 18.04 - EC2        5.8.0-1.20    0M  rdy  tpl 02/26/19 OpenNebula Public       0
[root@LXDoCentOS ~]# onemarketapp export 15 'lxd_node' -d 1
IMAGE
    ID: 1
VMTEMPLATE
    ID: 1

Add the default NAT miniONE network to the VM template.

[root@LXDoCentOS ~]# onetemplate update 1

Then add a NIC referencing that network.


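The fragment to append is a NIC section; assuming the default miniONE NAT network is named vnet (check onevnet list for the actual name):

```
# Attach the VM to the miniONE NAT network
NIC = [ NETWORK = "vnet" ]
```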
It would be a good idea, when setting a long-term LXD host, to set a fixed IP address for the VM.

Make the image persistent

[root@LXDoCentOS ~]# oneimage persistent 1

Now deploy the VM; all that remains is setting up an LXD node on it. You can easily follow the steps described in the documentation on the newly created VM. You can add the host by its IP address or create an associated name in the /etc/hosts file.
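Registering the VM as an LXD node is then a single command from the frontend; here the VM address is a placeholder:

```shell
# Add the Ubuntu VM as an LXD virtualization node;
# replace <lxd-vm-ip> with the VM's IP or an /etc/hosts alias
onehost create <lxd-vm-ip> -i lxd -v lxd
```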

Starting some LXD containers

The VM 2 will be our LXD node created in KVM. You need to add it in the Host tab: simply create a new host of type LXD with the VM’s IP as its name. In this case, host 0 is the default one created by miniONE, and host 1 is our virtualized LXD node.

Now we are ready to start a new container as usual. In the image below, the VM 9 is an Alpine container running inside VM 2, our virtualized LXD node.

You can log into the LXD node to check our container. It may be a good idea to lock the LXD VM in order to prevent mistakes when using Sunstone.

It is as simple as that. If you have questions or comments, don’t hesitate to bring them up in our Developers’ Forum.