Hotfix Release v.5.10.4 is Available!

A hotfix release is a type of incremental release that fixes specific issues. While OpenNebula is fully open source, packages from hotfix incremental versions are not publicly released, and are only available for users with an active support subscription. However, rest assured that the code is publicly available in the GitHub repository, as are the templates to create packages for the different supported platforms.

The following new features have been backported to 5.10.4:

  • Pyone, the Python API binding for OpenNebula, is now thread-safe.
  • Support for volatile disks on LXD.
  • Improve CLI filter operators handling.

The following issues have been solved in 5.10.4:

  • Fix default encoding when table encoding is not detected.
  • Fix the Graphics section when updating a VM template in Sunstone.
  • Fix the Scheduling section when updating a VM template in Sunstone.
  • Fix error messages when using onedb update-body.
  • Fix error in fsck when vnet lease has no ID.
  • Fix VMs & Images datatables in Sunstone.
  • Fix labels when updating a service in Sunstone.
  • Fix ACLs check permissions when creating a template.
  • Fix creating a group with no permissions.
  • Fix NIC aliases not working with NETWORK_SELECT = "NO".
  • Fix uid and gid of new VMs when scaling a service.
  • Fix scheduler actions not working with END_TYPE = 0.
  • Fix address range dialog when instantiating a VNet.
  • Fix display of Roles in a Service.
  • Fix installing the augeas gem on Debian systems.
  • Fix required IPv4 when IPAM driver is selected.
  • Do not allow users to increase their privileges to manage VMs.
  • Do not allow wrong string in VM_*_OPERATIONS attribute.
  • Fix problem with unmanaged NICs at deploy time.
  • Add VCENTER_TEMPLATE_NAME attribute in vCenter templates.
  • Fix vCenter information attributes to show correct icons if they can be modified or deleted.
  • Fix vCenter templates adding VCENTER_TEMPLATE_NAME attribute.
  • Fix MariaDB/MySQL version detection.
  • Fix template context variables on instantiation.

Everyone can create their own packages or build OpenNebula from the source code, but only OpenNebula Systems customers have the convenience of pre-created packages for hotfix incremental releases. If you are an OpenNebula Systems customer with an active support subscription, you have immediate access to these hotfix packages. Please check your private repository at

Relevant Links

OpenNebula Joins the Intel Network Builders “Edge Ecosystem”

OpenNebula and Intel join forces to accelerate the development of network edge solutions in the market.

As we continue to develop simple and flexible solutions for the massively growing demand for low-latency, distributed, lightweight compute power, OpenNebula's "Elastic Private Cloud" and its native capability for integrating bare-metal, geo-distributed resources on demand now provide readily available technological offerings and building blocks within the Intel Network Builders Edge Ecosystem.

With an ever-growing community of edge resource providers, along with OpenNebula’s nimble provisioning and cloud elasticity capabilities, our Open Source Edge Cloud is lining up perfectly for the evolving demands at the edge.

Check out the rest of our OpenNebula Partners within the OpenNebula Partner Ecosystem.

OpenNebula + Firecracker: Building the Future of On-Premises Serverless Computing

Firecracker is a new hypervisor⁠—widely used by AWS as part of its Fargate and Lambda services⁠—especially designed for creating and managing secure, multi-tenant container and function-based services. It enables deploying workloads in lightweight VMs (called micro-VMs) which provide enhanced security and workload isolation over traditional VMs, while enabling the speed and resource efficiency of containers. This upcoming integration of OpenNebula 5.12 with Firecracker builds a next-generation platform for on-premises serverless computing.

Some of the benefits that this integration is going to provide:

  • Seamless integration with container marketplaces like Docker Hub.
  • Direct execution of Docker images on micro-VMs and composition of containers with auto-scaling.
  • Multi-tenant, self-service cloud provisioning, including virtual data centers, data center federation and hybrid cloud computing.
  • Mixed hypervisor environments with KVM and VMware.

By reducing the overhead gap between VMs and containers, micro-VMs provide users with the boot speed and light weight of containers plus the security of a Virtual Machine. On top of that, Firecracker micro-VMs are further isolated with common Linux user-space security barriers (using methods like chroot, seccomp or cgroups) by an auxiliary tool called jailer. This second line of defense isolates the process inside the hypervisor in case the virtualization barrier is ever compromised.

By integrating Firecracker as a new hypervisor in the upcoming OpenNebula 5.12, we are incorporating an easy and secure solution for managing serverless workloads in private or hybrid clouds. To ensure that OpenNebula users are able to get the most out of this development, we're also integrating Docker Hub as a new way for users to retrieve images, making it very easy to deploy any image available on Docker Hub as a Firecracker micro-VM inside OpenNebula! 🚀

So far⁠—as you can see in the screencast above⁠—our Engineering Team has been successfully testing these new features on OpenNebula 5.10, so we can now officially confirm that Firecracker micro-VMs will be fully integrated and available to our users with the release of OpenNebula 5.12.


The networking subsystem will be fully integrated with Firecracker micro-VMs. This will allow micro-VMs to use any of the Virtual Networks available at your OpenNebula instances including all the protocols (based on Linux Bridging) already supported by OpenNebula, such as 802.1Q and VXLAN. This will make it very easy to start deploying new micro-VMs using existing networks and make them interact with already deployed applications based on current Virtual Machines or LXD system containers.


Contextualization is also fully supported by micro-VMs, allowing the user to easily deploy a micro-VM with all the configuration needed to be immediately functional without any manual intervention. This includes networking configuration or bootstrapping the serverless functions. As with other supported technologies (i.e. KVM, LXD and VMware vCenter), contextualization packages have to be installed in advance inside the image for contextualization to work properly. This won’t be necessary when retrieving images from Docker Hub, as the installation of contextualization packages has been included as part of the image build procedure, which is based on Dockerfiles.

VM Access

Apart from networking access, OpenNebula also provides VNC access to both VMs and containers. VNC access is also supported for micro-VMs. This will make it much easier to debug tasks running on micro-VMs, as the user will always have a channel to communicate with the VM even if networking is not operating properly or an SSH server is not available.

More to come…

This post is a first introduction to the integration of Firecracker as a new officially supported virtualization technology within the upcoming OpenNebula 5.12. This project opens up a whole new set of possibilities and, as such, we'll be working hard from now on, along with the OpenNebula Community, to bring you interesting guides and use cases. In the meantime, you can have a look at our new Firecracker datasheet and, as usual, please don't hesitate to send us your feedback, we'd love to hear how you are planning to use these amazing new features! 🤓

March 2020 – OpenNebula Newsletter

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.


In mid-March, our Engineering Team (whose members, along with the rest of the OpenNebula Team, have been working from home for three weeks already) announced Hotfix Release 5.10.3. While OpenNebula is fully open source, packages from hotfix incremental versions are not publicly released, and are only available for users with an active support subscription. This version fixes several issues and includes a number of new features that have been backported.

One of the novelties that the forthcoming OpenNebula v.5.12 will bring along, after several months of intense work, is a fully redesigned OneFlow. This powerful component, whose new version comes with an incredible improvement in performance, allows OpenNebula users to run complex services in their private cloud, now quicker than ever!

In the meantime, if what you want is to set up a simple OpenNebula edge cloud environment for evaluation purposes, you can always refer to this new step-by-step technical guide on MiniONE recently published by bare-metal provider Packet (now part of Equinix). This one follows a first technical guide published last month.


Kudos to Mara Sorella, Postdoctoral Researcher at the Research Center of Cyber Intelligence and Information Security (Sapienza University of Rome), for inaugurating the new Cybersecurity section of our blog with a guest post on “Building Emulation Environments for Cybersecurity Analyses”, which is based on the paper she presented a few months ago in Barcelona at the OpenNebulaConf 2019. Amazing stuff!

Many thanks also to Hitesh Jethva, Technical Writer at Alibaba Cloud, for publishing in early March this great tutorial on how to install and configure OpenNebula on an Alibaba Cloud Elastic Compute Service (ECS) Ubuntu 18.04 server.


One of the many public events that have been cancelled in recent weeks due to COVID-19 is the Red Hat Summit 2020, which we were going to attend as sponsors. The event will now run on April 28-29 as a Virtual Experience. Not quite the same, but still better than nothing…

And in line with the #StayHome movement, in the coming days we’ll be publishing the details of our new Webinar series, to make sure no-one misses any of the action in a year that is becoming a turning point for the OpenNebula project. Stay tuned! 🚀

PS: Take care of yourselves and of those around you. All our support to those who are fighting the pandemic on the front lines! 💪

DevSecOps! Rancher Labs Virtual Rodeo 26/5 9:00h Virtual Cloudadmins Barcelona Meetup

Rancher charges ahead!


We will tackle challenges such as continuous integration (CI) and continuous delivery (CD) of applications with container technologies like Docker and platforms like Kubernetes (K8s), using Rancher 2 on top of infrastructure services such as Amazon Web Services (AWS).

Sign up for the first virtual hands-on session in Spanish: a Super Rodeo led by Rancher Labs and its DevOps Lead (@Rawmind).

The lab will be deployed on AWS in an innovative labs-on-demand environment, thanks to the sponsorship of Rancher Labs itself.

This use case is also covered in the book “DevOps y Seguridad Cloud” (Editorial UOC).

See you next Wednesday, May 26, at 9:00h at the 2nd Cloud Admins Barcelona Meetup.


Reserve your seat!

A Sneak Peek into OneFlow’s New Performance Improvements

After months of work, we have finally finished the OneFlow revamp and internal logic redesign! 🎉 OneFlow is one of the most powerful components in OpenNebula, as it allows our users to run complex services in their private cloud.

We have focused on improving the internal logic while leaving the OneFlow API unchanged and providing the same functionalities it had in the past. In this way, those users that have created their own applications using this API do not need to apply any changes 😉

The most important benefit that this revamp brings to the table is the global improvement in performance that we obtain by reducing the time that each individual operation takes. We have achieved this mainly by using OpenNebula’s new hook event manager system. Instead of constantly checking the state of each virtual machine, we take the messages that are published in the events queue and use them to decide which operation to execute.

If we have, for instance, a service implementing a straight strategy, we can subscribe to the queue monitoring changes in virtual machine states:

EVENT STATE VM/#{state}/#{lcm_state}/#{vm_id}

We will get a message every time this virtual machine changes its state, so in applications that require multiple VMs to implement a specific workflow, we can very quickly identify when a parent is running and deploy the child virtual machines at that point.

If we have a service with straight strategy and a wait gate to report ready, we can subscribe to the queue for virtual machine update operations:

EVENT API one.vm.update 1

In this case, we will get a message every time the virtual machine is updated, so when OneGate—the service that allows virtual machine guests to pull and push VM information from OpenNebula—updates it and writes READY="YES", we can immediately deploy the child virtual machines.
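Under the hood, these subscriptions are plain topic strings matched against messages published by OpenNebula's event manager over ZeroMQ. A minimal Python sketch of building those topics and listening for them (the endpoint, port, and pyzmq usage are our assumptions for illustration, not details from this post):

```python
def state_topic(state="RUNNING", lcm_state="RUNNING", vm_id=""):
    """Build an 'EVENT STATE <state>/<lcm_state>/<vm_id>' subscription topic."""
    return f"EVENT STATE {state}/{lcm_state}/{vm_id}"

def api_topic(call="one.vm.update", success=1):
    """Build an 'EVENT API <call> <success>' subscription topic."""
    return f"EVENT API {call} {success}"

def listen(endpoint="tcp://localhost:2101", topics=()):
    """Print every event matching the given topic prefixes (blocks forever)."""
    import zmq  # third-party: pip install pyzmq
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    for topic in topics:
        # ZeroMQ SUB sockets filter by message prefix, so these topic
        # strings act as the subscription patterns shown above.
        sock.setsockopt_string(zmq.SUBSCRIBE, topic)
    sock.connect(endpoint)
    while True:
        print(sock.recv_string())

# Usage (against a live front-end):
#   listen(topics=[state_topic("ACTIVE", "RUNNING", 42), api_topic()])
```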

Apart from these two changes, we have also undertaken a full redesign of OneFlow’s code, to make it more readable and better implemented. We have also implemented a better error treatment system, so now most of the errors are shown in both the CLI and the GUI (Sunstone). Again, all this without modifying the API, don’t worry!

Time to check those performance improvements! For comparative purposes, we have carried out some benchmarks using the OneFlow component from Hotfix version 5.10.2—released on February 12, 2020—and from the new version of OneFlow that will be part of the forthcoming Minor release 5.12.

For these tests, we have used the following service template:

This is a very simple template with just two roles, using the Alpine Linux 3.8 appliance from OpenNebula’s public Marketplace for the two VMs that we’ll be deploying.

We have executed some of the most performance sensitive operations (i.e. deploy, scale and warning) through both versions of OneFlow, and here you have the results:

OpenNebula 5.10.2

Deploy (~1 minute)

11:15:26 [I]: [SER] New state: DEPLOYING
11:15:26 [I]: [ROL] Role Master new state: DEPLOYING
11:15:57 [I]: [ROL] Role Master new state: RUNNING
11:15:57 [I]: [ROL] Role Slave new state: DEPLOYING
11:16:27 [I]: [ROL] Role Slave new state: RUNNING
11:16:27 [I]: [SER] New state: RUNNING

Scale (~1 minute 20 seconds)

11:24:10 [I]: [ROL] Role Master scaling down from 2 to 1 nodes
11:24:10 [I]: [ROL] Role Master new state: SCALING
11:24:10 [I]: [SER] New state: SCALING
11:25:03 [I]: [ROL] Role Master new state: COOLDOWN
11:25:03 [I]: [SER] New state: COOLDOWN
11:25:33 [I]: [ROL] Role Master new state: RUNNING
11:25:33 [I]: [SER] New state: RUNNING

Note: cooldown takes 10 seconds, so real time is 1 minute and 10 seconds.

Warning (23 seconds)

12:30:18 [Z0][DiM][D]: Powering off VM 1
12:30:41 [I]: [ROL] Role Slave new state: WARNING
12:30:41 [I]: [SER] New state: WARNING

OpenNebula 5.12.0

Deploy (11 seconds)

11:47:09 [I]: [SER] New state: DEPLOYING
11:47:09 [I]: [ROL] Role Master new state: DEPLOYING
11:47:15 [I]: [ROL] Role Master new state: RUNNING
11:47:15 [I]: [ROL] Role Slave new state: DEPLOYING
11:47:20 [I]: [ROL] Role Slave new state: RUNNING
11:47:20 [I]: [SER] New state: RUNNING

Scale (15 seconds)

12:00:48 [I]: [ROL] Role Master scaling up from 1 to 2 nodes
12:00:48 [I]: [ROL] Role Master new state: SCALING
12:00:48 [I]: [SER] New state: SCALING
12:00:53 [I]: [SER] New state: COOLDOWN
12:00:53 [I]: [ROL] Role Master new state: COOLDOWN
12:01:03 [I]: [ROL] Role Master new state: RUNNING
12:01:03 [I]: [SER] New state: RUNNING

Note: cooldown takes 10 seconds, so real time is 5 seconds.

Warning (1 second)

12:07:01 [Z0][DiM][D]: Powering off VM 10
12:07:02 [I]: [SER] New state: WARNING
12:07:02 [I]: [ROL] Role Master new state: WARNING

As you can see, the total time has been dramatically reduced! And remember, this test was done with just two virtual machines and two roles… so imagine a service with more roles and more dependencies: the performance gains will be even greater!

But this is not all the new OneFlow brings along! Apart from these performance improvements, we have made some other interesting changes:

  • As we announced a few weeks ago, we have introduced the ability to create virtual networks automatically.
  • You will be able to use your own custom attributes for each service, and they will be passed on to the VMs via the context section. This will be really useful for your own contextualization process!

Well, that’s all for now 🤓 Hope you’ll find these new features as cool as we do! Feedback, as usual, is always very welcome: feel free to use the comments section below or our Community Forum. Cheers!

Building Emulation Environments for Cybersecurity Analyses

The focus of our research group at Sapienza University of Rome is cybersecurity. One of our current tasks involves developing models and algorithms for threat modeling and network hardening in computer networks. In practice, we generate huge “attack graphs” representing all possible ways an attacker could move laterally in a network by compromising networked systems. In this post I will provide an introduction to our work and to the role OpenNebula plays in our approach to cybersecurity research.

One of the analyses we perform in computer networks involves tracking the impact cyber-attacks have on the organization's mission. We do this by linking attack paths on networked assets with the dependencies of such assets within the organization's business processes. An application of such analyses is to propose optimal risk-aware mitigation actions to protect corporate networks.

One of the big problems we faced when designing this kind of analysis was evaluating our algorithms on real (or, at least, realistic!) scenarios. Very few datasets of actual networks exist, and those networks are typically very small in scale, unfortunately providing very limited information. Furthermore, the final aim of systems of this kind is handling actual live (and dynamic) environments that can be scanned to get real-time information from the various machines, updating the attack graph accordingly.

A solution based on OpenNebula

To accomplish these tasks, we designed an infrastructure built on top of OpenNebula. This solution allowed us to automatically deploy a target networked environment (what we call a testbed) on a dedicated cluster via infrastructure-as-a-code abstractions. This environment supports data collection as well as various cyber experimentation tasks.

Let’s explore the capabilities of our system! Our testbed could be, for instance, a virtual version of an existing network that we want to reproduce. To do so, we could perform tasks such as OS and service detection, as well as identification of the existing machines on the reference network, their running applications, and their vulnerabilities.

We can merge this information with the network layout to build a testbed specification. In our case, we produced a YAML file describing the topology and characteristics of our testbed. The detection/discovery step is, obviously, optional; one can always write a YAML file from scratch, describing a custom testbed—that’s why we won’t be looking into the specifics of the detection steps here.

To control the infrastructure we developed Cylab, an application that takes these specifications and implements them on an OpenNebula-based cluster. This infrastructure is designed in a way that allows us to collect relevant information, such as machine metadata and network traffic.


What I’d like to do at this point is share with you some of the technical and design choices we took when implementing this project, primarily those concerning the virtual environment infrastructure we decided to use, as well as the ones on the storage components and the network layer.

Virtual environment infrastructure

Part of our project involved comparing the two main open source Virtual Infrastructure Managers (VIM) by market penetration: OpenStack and OpenNebula. We approached this comparative analysis taking into consideration varied criteria, including internal organization, storage, networking, hypervisors and governance [1].

Among the factors behind our choice of OpenNebula over OpenStack there were a number of crucial aspects:

  • OpenStack is made of a ton of submodules, each of them being a subproject on its own, such as Heat (orchestrator), Swift (object storage support), Neutron (networking), Keystone (user management and authentication), etc… Each of them has a different maturity level, integration model and API.
  • OpenNebula, on the contrary, has a sort of monolithic core and a single endpoint, which is managed through a set of coherent APIs and a single user-friendly GUI (Sunstone).
  • OpenStack is controlled by a Foundation whose priorities are driven by vendors that also sell their enterprise-grade proprietary implementations of its subcomponents (in the form of ‘support’ extensions).
  • OpenNebula only releases a single, free, user-driven version of their product, and the whole project is managed by a single vendor backed by a community of developers.

After that initial choice, we can confirm now that OpenNebula is a simple product to set up and use, but at the same time it is robust and enterprise-ready.

Storage layer

Once we had made the decision to use OpenNebula as our Virtual Infrastructure Manager, we moved on to the next challenge: how to perform one of the fundamental tasks of a virtualization platform, that is, providing access to a repository of VM images.

Typically, in OpenNebula, new VM templates—or appliances—are added to a single node either manually or by using its command-line interface (CLI) or the Sunstone GUI to download them from the project’s public Marketplace. Of course, a private Marketplace can also be set up to store corporate VM templates.

Appliances from a local Marketplace have to be accessible at any time in order to be instantiated on demand on any of the nodes that form our virtualized infrastructure. However, using a centralized repository would result in a high network load for every new VM spawn.

In our case we decided to use GlusterFS, the well-known open source distributed filesystem, to aggregate storage mount points (“bricks”) from a pool of trusted servers and set up in this way the storage infrastructure we needed for our Images Datastore.

In particular we chose GlusterFS’s replicated mode, in which exact copies of the data are maintained on all bricks. This fosters data locality at VM instantiation time, something similar to what can be achieved with alternative open source storage solutions such as Lustre and Ceph.

Networking layer

For the networking layer—in particular for communication across physical nodes—we chose Open vSwitch (OVS), a software implementation of a multilayer network switch. Every virtual interface belongs to a virtual network, which in turn sits on a specific bridge handled by OVS.

OVS maintains a MAC address database to keep track of the VM addresses in the various LANs, and processes incoming frames to decide whether to switch a packet to a local virtual interface or forward it to a physical interface for delivery to other physical nodes. In the following figure, for instance, bridge br0 is aware of the tap0-eth0 virtual-to-physical mapping:


VLANs can be configured directly from OpenNebula, to sit on a specific bridge, and multiple bridges can also be used to separate VLANs logically. Furthermore, OVS also comes with SPAN/RSPAN functionality that enables efficient data collection at the bridge level, which for us was quite handy.

Data collection

An important aspect of what we want to be able to do as part of our cybersecurity analysis is collecting traffic from a specific testbed we have deployed on our virtualized environment.

The VMs of a testbed are usually deployed on different physical hosts, and their virtual network interfaces sit on OVS bridges, with different physical nodes connected to each other through a switch. Of course, a particular VM can have multiple interfaces.

If we want to collect all network traffic in the testbed without missing the traffic between VMs deployed on the same host, that's when OVS comes to the rescue, as it can be configured to use Switched Port Analyzer (SPAN). SPAN allows mirroring the traffic from each of the network interfaces of interest towards a specific designated output port (the SPAN port).
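As an illustration, such a mirror can be created with a single `ovs-vsctl` transaction, following the pattern from the Open vSwitch documentation (a sketch: `br0` and `tap-span` are placeholder bridge and collector-port names, not taken from our setup):

```shell
# Mirror all traffic on bridge br0 to the port attached to tap-span,
# where a capture tool (e.g. tcpdump) can collect it.
ovs-vsctl -- set Bridge br0 mirrors=@m \
          -- --id=@out get Port tap-span \
          -- --id=@m create Mirror name=span0 select-all=true output-port=@out
```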

Typically, to set up a cybersecurity-oriented emulation environment like ours, the required physical infrastructure would consist of a number of physical servers, with one of them acting as the OpenNebula Front-end—this is the server running the OpenNebula daemon and the Sunstone GUI. OpenNebula comes with HA support, but in our case this was not required.

The rest of nodes in the environment just need to go through a simple configuration process, depending on the specific virtualization technology you are planning to use (KVM, LXD, etc.). If OpenNebula’s Image Datastore is managed via GlusterFS—like in our case—the physical servers should be connected through a switch to a dedicated storage network.

Deploying testbeds automatically with Cylab

Now that we’ve had a look at the bits and bytes, let’s talk about Cylab! CyLab—or “Cyber range laboratory”—is a software application we developed a year and a half ago (there was no official Terraform Provider for OpenNebula back then) to be able to use a YAML description file to carry out the automated deployment of a virtual testbed on our OpenNebula-based environment.

Our YAML description files tend to contain specific portions devoted to configuration of VLANs, virtual routers and firewalls. They also include information regarding the operating system (i.e. which specific VM template to instantiate), VM users, and the services that have to be installed on the machines, along with custom init/configuration scripts.
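The actual CyLab schema is not shown in this post, but a hypothetical fragment in the spirit of that description (all names, fields and values below are illustrative) might look like:

```yaml
# Hypothetical testbed description; CyLab's real schema may differ.
testbed:
  name: corp-net
  vlans:
    - { name: dmz, bridge: br0, vlan_id: 10 }
    - { name: lan, bridge: br0, vlan_id: 20 }
  routers:
    - name: edge-router
      interfaces: [dmz, lan]       # virtual router joining the two VLANs
  vms:
    - name: web01
      template: ubuntu-18.04       # which VM template to instantiate
      network: dmz
      users: [analyst]
      services: [apache2]          # installed via Ansible after deployment
      init_script: ./scripts/web01-bootstrap.sh
```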

After using this method for the deployment of a testbed, this is what our list of virtual machines looks like:


CyLab‘s front-end is an Angular/Bootstrap web application that interacts with browser clients for testbed creation and manipulation. It talks to the back-end via REST API. CyLab‘s back-end is a Java Spring Boot application with DB (PostgreSQL) persistence.

The back-end communicates with OpenNebula using its native XML-RPC API, which is one of the core components of the project's powerful modular approach. The back-end also integrates with an Ansible server for service installation on deployed VMs.
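That XML-RPC API can be driven directly from the Python standard library. A minimal sketch of the kind of call a back-end like CyLab's might make (the endpoint, credentials and helper names are placeholders; the `[success, payload, error_code]` reply shape follows OpenNebula's documented XML-RPC convention):

```python
import xmlrpc.client

def parse_one_response(resp):
    """OpenNebula XML-RPC methods return [success, payload, error_code]."""
    success, payload, error_code = resp[0], resp[1], resp[2]
    if not success:
        raise RuntimeError(f"OpenNebula error {error_code}: {payload}")
    return payload

def vm_info(endpoint, session, vm_id):
    """Fetch the XML description of one VM from the front-end."""
    server = xmlrpc.client.ServerProxy(endpoint)
    return parse_one_response(server.one.vm.info(session, vm_id))

# Usage (against a live front-end):
#   xml = vm_info("http://frontend:2633/RPC2", "oneadmin:password", 0)
```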


The applications are endless…

Such a versatile and robust environment provides some clear advantages, but the applications of an OpenNebula-based IaaS model like the one we’ve just described are really numerous, and not just for cybersecurity analysis. In our case, these were some of those immediate applications:

  • Cyber-range deployment for security training and testing: allows us to deploy custom scenarios to perform security training activities such as incident management training (detection, investigation, response) or general training for raising employees’ cybersecurity awareness.
  • Dataset generation: collecting data from testbeds (e.g., network traffic) and, more generally, conducting any analysis that is not related to performance (e.g. user behavior profiling).

As an example of the latter, for our paper on this subject [1] we generated a network traffic dataset of benign and malicious network traffic collected from a testbed made of 52 hosts deployed in our emulation environment. For that purpose, we deployed a number of software agents written in Python on some of the VMs.

Those agents carry out a number of benign traffic simulation jobs (HTTP/HTTPS web browsing, SSH, Samba and SFTP) managed by a scheduler. The jobs capture different behavioral patterns. At the same time, we performed various cyber attacks following a diverse set of attack scenarios (e.g. Heartbleed, a RCE Attack on Drupal CMS, and a Ransomware Deployment) and collected all generated malicious traffic.
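The agents themselves are not published in this post, but the scheduler-driven pattern can be sketched in a few lines of Python using the standard `sched` module (job names, bodies and timings here are purely illustrative):

```python
import sched
import time

def http_browse(log):
    # A real agent would fetch web pages here; we only record the job.
    log.append("http_browse")

def ssh_session(log):
    # A real agent would open an SSH session here.
    log.append("ssh_session")

def run_agent(jobs, rounds=3, spacing=0.01):
    """Run each job `rounds` times on a simple timed schedule; return the log."""
    log = []
    s = sched.scheduler(time.time, time.sleep)
    for r in range(rounds):
        for i, job in enumerate(jobs):
            # Stagger jobs so each fires at a distinct time.
            s.enter((r * len(jobs) + i) * spacing, 1, job, argument=(log,))
    s.run()  # blocks until all scheduled jobs have executed
    return log

log = run_agent([http_browse, ssh_session], rounds=2, spacing=0.001)
```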

The final dataset has been processed to obtain feature-rich labeled attack flows and benign traffic, all of which we have released publicly. Such extensive and comprehensive datasets are quite useful for cybersecurity analyses, such as training IDS and IPS classifiers and other machine-learning tasks, as well as for deep packet inspection investigation and other related activities.

Hope you have enjoyed this article, and we hope our experience with OpenNebula and other open source technologies will be useful to colleagues working in the cybersecurity sector. For questions and clarifications, please leave a comment on this site and we’ll be happy to answer!

PS – Our solution to create an emulation environment based on OpenNebula for cybersecurity analysis was presented a few months ago in Barcelona (Spain) at the OpenNebulaConf 2019, and also in Bangalore (India) at the International Conference on Distributed Computing and Networking 2019. An article about our work has appeared previously on The Register.


[1] Building an Emulation Environment for Cyber Security Analyses of Complex Networked Systems. Florin Dragos Tanasache, Mara Sorella, Silvia Bonomi, Raniero Rapone and Davide Meacci (2019). IEEE International Conference on Distributed Computing and Networking 2019. arXiv version:

Hotfix Release v.5.10.3 is Available!

A hotfix release is a type of incremental release that fixes specific issues. While OpenNebula is fully open source, packages from hotfix incremental versions are not publicly released, and are only available for users with an active support subscription. However, rest assured that the code is publicly available in the GitHub repository, as are the templates to create packages for the different supported platforms.

The following new features have been backported to 5.10.3:

  • Increase number of wild VMs shown per page in Sunstone.
  • Add datastore for live migration in Sunstone.
  • Make some improvements in onehook CLI.
  • Support hot disk resize in vCenter.
  • Fix wrong usage data monitoring for CEPH.
  • Add button to enable or disable an input in Sunstone.
  • Add non-interactive CLI user inputs.

The following issues have been solved in 5.10.3:

  • Fix NIC aliases when updating a VM template in Sunstone.
  • Fix VM scheduler requirements.
  • Fix clusters on Virtual Networks Templates in Sunstone.
  • Fix edit group dialog in Sunstone.
  • Fix NIC when updating a VM template in Sunstone.
  • Fix VNC window in Sunstone.
  • Fix errors on detaching VM disks.
  • Fix database encoding overwritten by onedb upgrade.
  • Added missing package dependency on libcurl on Debian/Ubuntu.
  • Obsoleted add-on packages.
  • Library include errors in econe tools and oneprovision.
  • Missing gems in install_gems groups.
  • Fix NIC parameters when updating a OneFlow template.
  • Removed obstructing oneimage path validation.
  • Fix for metadata corruption when snapshotting an imported running VM in vCenter.
  • Fix the visibility of the RDP button in Sunstone.
  • Fix Address Ranges for Virtual Network templates in Sunstone.
  • Fix form behavior on oneflow templates in Sunstone.
  • Fix attaching a NIC alias when using network mode auto.
  • Fix LXD CPU and RAM monitoring always being 0.
  • Fix retrieve input function in Sunstone.
  • Fix live migration in Sunstone.
  • Fix wrong error handling in CLI.
  • Fix vCenter context data not refreshed on NIC (alias) detach.

Everyone can create their own packages or build OpenNebula from the source code, but only OpenNebula Systems customers have the convenience of pre-created packages for hotfix incremental releases. If you are an OpenNebula Systems customer with an active support subscription, you have immediate access to these hotfix packages. Please check your private repository at


February 2020 – OpenNebula Newsletter

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.


In mid-February, our Engineering Team announced Hotfix Release 5.10.2. While OpenNebula is fully open source, packages from hotfix incremental versions are not publicly released, and are only available for users with an active support subscription. This version fixes several issues and includes a number of new features that have been backported.

A new enterprise extension has also been made available this month: OneScape. The main objective of this brand-new component of OpenNebula is to simplify the maintenance, management and upgrade workflows for corporate users. It’s our way to say thank you to all those organizations supporting OpenNebula by purchasing one of our support subscriptions.

But wait, that’s not all! A few days ago we officially announced the release of OpenNebula’s new Microsoft Azure Driver. OpenNebula supports cloud bursting for Hybrid Cloud deployments on Azure, AWS, and third-party OpenNebula instances. Now, following Microsoft’s recommendation, we’ve updated the Azure driver in OpenNebula to make it compatible with the new Azure Resource Manager. This new feature will be released as part of the forthcoming OpenNebula v.5.12. Hurrah!

And if what you need is to set up a simple, all-in-one OpenNebula environment—maybe for development purposes or as a base for a larger temporary deployment—here you have a step-by-step technical guide on MiniONE recently published by bare-metal provider Packet (now part of Equinix). This new resource is the perfect complement to the whole new set of product datasheets and technical white papers that we’ve just published on-line 😉


Great news coming from the edge of the cloud universe! Yesterday, just in time to make it to this very same newsletter, we announced a new Edge Computing User Group. We are looking for users and members of the Community willing to collaborate with us on expanding our ONEedge initiative and building robust Edge Computing capabilities into OpenNebula. Join us on this exciting journey!

And speaking about members of the Community… kudos to José Alcántara and David Trigo! Next week they will be presenting OpenNebula and offering a practical workshop for participants at the HPC AdminTech summit in Seville (Spain). If you have been looking for an excuse to visit Andalusia, enjoy the local food and culture, and have a great time talking about open source cloud, HPC, Artificial Intelligence and Deep Learning… well, this is it! 😛


This month we have also announced—finally!—the preliminary 2020 schedule for our TechDays. If everything goes as expected (and COVID-19 doesn’t end up disrupting every single business trip or tech event on the planet) we’ll be meeting over the next months in several locations in Austria, Bulgaria, France, Spain and the United States. Registrations for our first Cloud TechDay—Sofia (Bulgaria)—which will be part of StorPool’s European Cloud Infrastructure Day on May 14, are already open!

In the meantime, we also keep working on our OpenNebula Conference 2020, which as you know will take place at the beautiful Tangla Hotel in Brussels (Belgium) on October 1-2. Earlier this month we launched the Call for Sponsors, with details on how to support our main annual Community event through one of the available sponsorship packages we’ve designed for this edition.

February has also given us a chance to participate in some amazing events organized by the global open source community. Apart from attending both the SustainOSS Summit and CHAOSScon EU, the OpenNebula team has presented some of the project’s main novelties at FOSDEM (Brussels), the iconic Free and Open Source Software Developers’ European Meeting. We took part in the Virtualization & IaaS DevRoom with a talk on “Edge Clouds with OpenNebula” that actually went pretty well!

And if you happen to be in, or near, Madrid (Spain) on March 19, maybe you’d like to join us for the Hosting & Cloud Transformation Summit that we co-sponsor and where we’ll be showcasing our use case for the gaming industry and speaking about data center virtualization, hybrid cloud, and edge computing. Hurry up and register ASAP before tickets are gone!

PS: Remember that the OpenNebula project is growing and we are hiring!

ISACA BARCELONA: Training Session: “Round table: DevOps and Security?”


Next Tuesday, March 3, will be a very interesting day. The continuing-education talk will be about:

“Round table: DevOps and Security?”


Challenges such as continuous integration (CI) and continuous delivery (CD) of applications, hand in hand with container technologies like Docker and platforms like Kubernetes (K8S), on top of infrastructure services such as Amazon Web Services (AWS) or OpenNebula/OpenStack, automation tools like Terraform, and testing tools like Jenkins…

But not everything is about tools and technology: what about methodologies? Agile or not so agile? Other management approaches? And what about security?

Some of these questions, along with others of interest to attendees, will be addressed during the session alongside the authors of the book “Devops y Seguridad Cloud”.

We will be joined by:


  • Jordi Guijarro (Fundació i2CAT)

  • Joan Caparrós (Consorci de Serveis Universitaris de Catalunya)

  • Lorenzo Cubero (Netcentric)


Tuesday, March 3, 2020

Venue: IL3 (Institut de Formació Contínua)

Carrer Ciutat de Granada, 131



Please confirm your attendance before March 2, 2020.

Note: The hours corresponding to the conference will be recognized as training hours for the purposes of the CISA/CISM/CGEIT/CRISC continuing-education policy.

The following attendance fees have been set:

ISACA Barcelona members: free admission
Partner organizations of ISACA Barcelona: limited places
Non-members: €50.

See you soon,