Expanding OpenNebula’s multi-VM management component with more network features

We are working these weeks on improving one of OpenNebula’s most powerful components: OneFlow. OneFlow can be used to create, control and monitor complex services composed of groups of interconnected Virtual Machines that come with internal deployment dependencies. Each group of VMs is deployed and managed as a single entity, and is completely integrated with the advanced OpenNebula user and group management service. Once deployed, OpenNebula manages the elasticity rules defined by the users, making sure the different elements of a specific service react properly in response to, for instance, a scheduled allocation of additional resources or a sudden increase of workload.

In this post I’d like to share with you some of the new network features that the OpenNebula team is currently developing. These improvements are intended to be released as part of the next minor release of OpenNebula, version 5.12, so this is just an early taste of what’s to come in just a few weeks’ time 😉

Let’s start with an example: now, when a service template is created, OpenNebula also lets you define a set of networks for that service. You can define, for instance, a Private and a Public network, which will be available to all the VMs that are part of that service. This is how you do it with the current version:
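As a rough sketch, a OneFlow service template with two named networks might look something like the JSON below. Treat the attribute names and the `M|network|...` strings as illustrative only; the exact syntax can vary between OpenNebula versions:

```json
{
  "name": "my-service",
  "deployment": "straight",
  "networks": {
    "Public": "M|network|Public network| |id:",
    "Private": "M|network|Private network| |id:"
  },
  "roles": [
    {
      "name": "frontend",
      "cardinality": 1,
      "vm_template": 0
    }
  ]
}
```

Each entry under `networks` becomes a network the user has to map to a real virtual network when the service is instantiated.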

Pretty straightforward, right? When the service template is actually instantiated, the user can then select from the network pool which available network will be the “Public” one and which will be the “Private” one.

What we are doing now is implementing additional options to get the most out of the methods for self-provisioning virtual networks that were recently incorporated into OpenNebula. With the improved version of OneFlow we are currently working on, users will be able to choose whether they want to

  • Use an existing network
  • Create a reservation from an existing network
  • Create a network from an existing virtual network template

You can see below what the prototype for this new interface looks like. You’ll be able to use it to select how the Public and Private networks are defined in the instantiate view. Reservations and virtual networks will be automatically deleted as soon as the associated service terminates. This will allow users to easily create dedicated networks for each service instance, fully managed by OpenNebula.

In the end, the main objective of this improved version of OneFlow is to increase the flexibility of OpenNebula, allowing cloud administrators to easily cover all end-user use cases. Still, remember that this is a work in progress, so feedback is more than welcome! 🙂

Ricardo Diaz

Cloud Developer

Sergio Betanzos

Cloud Developer

First Cloud Admins Barcelona Meetup (January 23, 18:30, FNAC L’Illa Diagonal)

 


Working in DevOps? https://www.meetup.com/es-ES/Cloud-Admins-Barcelona/events/267701751/

Challenges such as continuous integration (CI) and continuous delivery (CD) of applications, hand in hand with container technologies such as Docker and platforms such as Kubernetes (K8s), on top of infrastructure services such as Amazon Web Services (AWS) or OpenNebula, automation tools such as Terraform, and testing tools such as Jenkins…

But it’s not all about tools and technology. What about methodologies? Agile or not so agile? Other management approaches? And what about security?

Some of these questions, and hopefully a few that you bring up yourself, will be addressed at the presentation of the book “DevOps y Seguridad Cloud”, together with its authors and guests such as Dr. Josep Jorba, Director of the Cloud Computing postgraduate program at the UOC, and Dr. Remo Suppi, professor at the UAB and collaborator with the UOC.

See you on January 23 at 18:30 at the first Cloud Admins Barcelona Meetup. Save the date -> https://www.meetup.com/es-ES/Cloud-Admins-Barcelona/events/267701751/

Cloud Admins Barcelona Team

OpenNebula 2019: Year in Review

Another year comes to an end, and as we look back on 2019 it becomes clear that the OpenNebula project is in the middle of an exciting turning point. This year has brought important novelties from a product development perspective: OpenNebula now incorporates promising integrations and new functionalities as a result of the joint efforts of developers, users and partners.

Thanks to a vibrant Community and a committed Partner Ecosystem, OpenNebula has consolidated its position as a leading open source Cloud Management Platform. This scenario has opened up new opportunities, such as the development of our open source approach to Edge Computing, providing powerful tools for companies to easily deploy low-latency services in close proximity to users, machines, and sources of data.

 

Expanding OpenNebula’s technological frontiers

The OpenNebula development team has been quite busy this year. 2019 has witnessed the release of two stable versions (5.8 “Edge” in February and 5.10 “Boomerang” in November), along with three hotfix versions and three maintenance releases (including the release of version 5.10.1 a couple of weeks ago). Installing OpenNebula has never been so easy: the installable packages now include all the dependencies, making the installation process smoother.

Several of the improvements that users have enjoyed this year had to do with OpenNebula’s core, including new support for NUMA and CPU pinning. The Hook Subsystem has been redesigned and is now much more flexible thanks to a new event queue that facilitates integrations with third-party applications. We’ve also enhanced the OpenNebula scheduler with automatic NIC selection, which alleviates the burden of VM/container template management in edge environments with heterogeneous network configurations.

In 2019 we’ve taken a crucial step forward, introducing support for infrastructure containers through LXD, the powerful Linux system container manager, which enables low resource container orchestration. We’ve improved the support for mixed storage modes, making OpenNebula more flexible and easier to integrate with third-party storage solutions, and we’ve also added DPDK support to the Open vSwitch drivers, which dramatically increases performance in network-hungry, densely packed VMs.

OpenNebula users with underlying VMware infrastructure have also enjoyed some specific novelties in 2019, including the new migration—both cold and live—of VMs between different vCenter clusters and/or different datastores. The new automatic image conversion between VMDK and QCOW2 facilitates the transition from VMware infrastructure to KVM (and vice versa). The recently-added NSX integration also brings some significant improvements for creating and consuming NSX networks from within OpenNebula, making network virtualization much simpler. vOneCloud, the open source replacement for VMware vCloud that we provide in the form of a virtual CentOS appliance for vSphere, will see a new release coming out in a matter of weeks.

Finally, OpenNebula’s user interfaces have also seen a series of enhancements, starting with the improved output of the command-line interface (CLI). Sunstone, OpenNebula’s GUI, will go through a major redesign next year as part of the release of OpenNebula 6.0. In the meantime, an extra layer of security is now in place, with the recent addition of Two-Factor Authentication to the login process. The new Distributed Data Centers feature, on the other hand, now provides an easy way to build and grow your cloud on bare-metal cloud providers, a key element for implementing an edge cloud architecture.

MiniONE, our simple deployment tool for installing a single-node KVM or LXD cloud, has also gone through some changes in 2019. It is now easier than ever to set up your own OpenNebula evaluation environment inside a virtual machine or physical host in just a few minutes. Recent versions of miniONE come with an option that enables you to try OpenNebula’s new Edge Computing functionality, which includes the OpenNebula frontend and KVM hypervisors already pre-configured and ready to be deployed on public bare-metal providers (e.g. Packet).

 

To boldly go where no one has gone before…

OpenNebula’s vision and firm commitment to open source and Edge Computing have been recognized in 2019 by the European Innovation Council. A few months ago, OpenNebula Systems was awarded €2.1M from the EU Horizon 2020 SME Instrument Program to assist in the development and productization of ONEedge, an innovative platform that brings the private cloud to the edge through cloud disaggregation.

ONEedge takes advantage of bare-metal offerings from public cloud providers to naturally evolve a private edge cloud, providing companies with an automated software-defined platform to build private Edge Computing environments based on highly-dispersed edge nodes. A prototype of this new product has already been successfully tested to launch a distributed gaming cloud, an AWS IoT Greengrass edge environment, and a pilot platform by Telefónica for the real-time self-provisioning of telco services. So buckle up, because the project is about to engage warp speed! 🚀

In preparation for this exciting voyage to the outer reaches of the cloud universe, OpenNebula is reinforcing its crew and warming up the impulse engines of its new starships. In November 2019, OpenNebula Systems moved to its new HQ office at La Finca Business Park in Madrid, the innovation hub hosting the main facilities of well-known technological companies such as Microsoft, Accenture, Orange and Veritas Technologies. This move quadruples the size of our premises in Spain. Along with this change comes the planned expansion in 2020 of the OpenNebula Labs in Brno (Czech Republic)—which will double in space—and also of OpenNebula’s office in Boston, Massachusetts.

All this comes as a result of the enlargement of the OpenNebula team, which is now hiring and incorporating new profiles into the project, including the recent landing of a new Open Source Community Manager at the Madrid HQ. Our global team is growing, and we are now searching for cloud systems developers, cloud systems engineers, full stack developers, a digital marketing specialist and a cloud technical evangelist. All candidates must be willing to join us in this adventure to explore strange new worlds and seek out new life beyond the current frontiers of open source cloud and Edge Computing.

 

 

An open source project that revolves around its community

The fate of any truly open source project is intrinsically tied to the strength and enthusiasm of its community of developers, users and partners. With a solid team of core developers and a growing number of users, contributors, translators, project ambassadors and third-party add-ons, the OpenNebula project stands as a living example of the viability and potential of a fully open source model.

Openness for us means that you can run production-ready software that is fully open source, without proprietary extensions that lock you in. OpenNebula is not an ‘open core’ solution or a limited version of enterprise software. There is one and only one OpenNebula distribution, and it is Apache-licensed and enterprise-ready. After you’ve decided to use the product, you can then choose between community or professional support. As simple as that.

Now that we are fast approaching the end of 2019, we can confirm that this year has also been a turning point from a Community perspective, with a number of amazing developments and new joint projects coming up. We’ve seen OpenNebula Kubernetes being officially listed as a Managed Production Environment Solution, Iguane Solutions’ Terraform Provider for OpenNebula being officially approved by HashiCorp, and new storage drivers being developed for LINSTOR and HPE 3PAR.

The successful integration of OpenNebula with LXD and Docker has opened up a number of interesting possibilities, which have been expanded in 2019 by the contributions that several community members have made to the project, including a fully automated Helm chart to deploy an OpenNebula control-plane on Kubernetes and a recent integration of OpenNebula with ElastiCluster.

Other exciting initiatives that have taken place this year include OpenNebula’s participation in Packet’s Edge Access Program—a scheme to accelerate open source and commercial use cases by providing access to edge infrastructure, technology partnerships and expertise—and the announcement of OpenNebula’s membership in Vapor IO’s Kinetic Edge Alliance—a network that brings together the leading companies in Edge Computing to solve the key challenges at every layer of the stack, and create solutions for essential edge use cases.

And finally, the icing on the cake has come this year in the form of an article about OpenNebula published by Linux Journal as part of their “FOSS Project Spotlight” series, plus a second piece published on VMware’s Cloud Community blog as part of their “Exploring Ecosystem Partners for VMware Cloud on AWS” series—not bad for a project that has come all this way with virtually no PR budget… 😉 Of course, in terms of spreading the word out there, our Community Champions deserve much of the credit! Next year we’ll review our Champion Program to make sure that we are giving them all the support they need.

 

Open source is social

What makes an open source project a living organism is not only the code, but also the people that develop, use, translate and promote the shared values and vision behind a specific piece of software. That is why we at OpenNebula will keep investing time, resources and energy in providing the members of our Community with enough chances throughout the year to meet, get to know each other, talk about open source, cloud technologies and Edge Computing, find synergies and learn from each other.

On October 21 and 22, we celebrated the 8th Annual OpenNebula Conference in sunny Barcelona (Spain), where we welcomed a technically-minded group of participants. Along with OpenNebula core developers, the attendees enjoyed a number of tutorials, hacking sessions, and workshops full of technical gems, including some animated discussions about the upcoming features of OpenNebula, best practices to operate a cloud, and how to deploy container-based solutions with OpenNebula. Many thanks to LINBIT, StorPool and NTS for sponsoring this year’s edition of our OpenNebulaConf!

And here you have an early announcement: the OpenNebulaConf 2020 will take place in Brussels (Belgium), from September 30 to October 2. We’ll be spending a few days at the heart of Europe, presenting our vision on open source cloud and Edge Computing, and enjoying the special allure of the Belgian capital. And all that without clashing with the Open Networking & Edge Summit Europe! 😎

But what if you can’t make it to our annual conference? Don’t worry! The OpenNebula TechDays come to the rescue. Running since 2014, our TechDays are educational and networking public events to learn about OpenNebula, Edge Computing and open source cloud. Co-organized by our partners and local user groups, they provide a chance to meet OpenNebula core developers and other members of the Community for a one-day, hands-on workshop on cloud installation and operation. We’ve celebrated four TechDays in 2019:

  • Barcelona (co-organized by CSUC)
  • Sofia (co-organized by StorPool)
  • Frankfurt (co-organized by Interactive Network and EuroCloud Germany)
  • Vienna (co-organized by NTS)

Interested in hosting an OpenNebula TechDay? The Call for Hosts for 2020 is still open, the deadline for submitting a proposal being January 10. We’ll be announcing next year’s TechDays shortly after that date. In the meantime, we look forward to hearing from you!

After a year in which OpenNebula has been present at major conferences on virtualization and cloud computing such as VMworld US (San Francisco) and VMworld EU (Barcelona), we are now working on a new and exciting calendar of events for 2020. As always, we look forward to hearing your feedback, answering questions, meeting amazing people, and showcasing the features of our latest releases. We’ll publish our future commitments and sponsorships very soon!

So stay tuned because 2020 will come full of wonderful things. Thank you very much for your support in 2019, Happy New Year, and live long and prosper! 🖖

The OpenNebula Team

 

December 2019 – OpenNebula Newsletter

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

Earlier this month, OpenNebula’s development team published the first Maintenance Release of stable branch 5.10 ‘Boomerang’: OpenNebula 5.10.1 comes with a series of bug fixes and minor improvements. Check the release announcement for more details and access to a complete list of new features.

Despite all the hard work that’s involved in a new release, our indefatigable developers also had time and energy to contribute, as usual, to the project’s blog. This month’s posts covered a series of tips for selecting the right CPU model on KVM x86 hosts and a step-by-step tutorial on how to create an OpenNebula appliance for Gaming Servers.


Community

This month we’ve opened the Call for Hosts for the OpenNebula TechDays 2020. Our TechDays are educational and networking public events to learn about OpenNebula, edge computing and open source cloud. Think about hosting our technical experts from OpenNebula Systems and members of the OpenNebula Community for an open, one-day workshop on cloud installation and operation—deadline for this call is January 10, 2020.

Special thanks to OpenNebula Champion Daniel Dehennin (EOLE) for giving a presentation about OpenNebula at the JRES 2019 in Dijon (France). You can access his talk—OpenNebula: l’informatique élastique 100% open source—through this link. Merci beaucoup, Daniel!


Outreach

We are very happy to confirm that, once again, the OpenNebula team will be present at FOSDEM (1-2 February, Brussels), the iconic Free and Open source Software Developers’ European Meeting. We’ll be taking part in the “Virtualization and IaaS” Developer Room with a talk about Edge Clouds with OpenNebula. Come and join us!

And that is not the last time we’ll be visiting Belgium next year! Here you have an early announcement: the OpenNebulaConf 2020 will be taking place in Brussels from September 30 to October 2. Save the date, because next edition comes with a number of interesting novelties and activities. More details to follow very soon, so stay tuned! 😀

PS: Remember that the OpenNebula project is growing and we are hiring. Any help to spread the word is more than welcome!

How to Create an OpenNebula Appliance for Gaming Servers

After installing a fresh instance of OpenNebula, the next logical step for some users might be the creation of a Virtual Machine (VM) image. The OpenNebula project provides many ready-to-use appliances as part of its public marketplace, but sometimes you need to produce your own image for a specific purpose. In this post I will guide you through the process of creating a customized VM image ready for production. Ready, steady, go! 🙂

Creating a base VM image

The main tool we’ll use for this is called libguestfs, which is defined on its own website as a set of “tools for accessing and modifying virtual machine disk images”. More importantly for our purposes, these tools can also be used to create VM images.

Let’s install them by using one of these commands, depending on your Linux distribution.

For Fedora/RHEL/CentOS:

sudo yum install libguestfs-tools

For Debian/Ubuntu:

sudo apt-get install libguestfs-tools

The first step is to check which base VM images we can create with libguestfs. To do so, let’s execute the following command to see the full list of base images available to us (in this example we will be working only with CentOS 7.7):

virt-builder -l

If we were only interested at this stage in creating a simple CentOS VM image, we could do that by using the following command, and it would create a local file called centos-7.7.qcow2 containing our new image:

virt-builder centos-7.7 --format qcow2 

After this, a new CentOS VM image has been created but it’s not ready yet for OpenNebula. For that, we need to take a crucial step: to install the context packages inside the VM.

To make sure that the new VM comes with all the necessary packages already preinstalled, we can create a local file in our computer called script.sh, which will be executed inside our VM image at the time we generate it.

We need this script to contain the context that turns a generic CentOS VM image into an OpenNebula-ready CentOS VM image:

#!/bin/bash
# Download the OpenNebula contextualization package (-O saves it under its remote filename)
curl -L -O https://github.com/OpenNebula/addon-context-linux/releases/download/v5.10.0/one-context-5.10.0-1.el7.noarch.rpm
# Install the EPEL repository and the context package
yum install -y epel-release
yum install -y one-context-[0-9]*el7*rpm
# Clean up the downloaded package
rm -f one-context-5.10.0-1.el7.noarch.rpm

With this file ready, we can now run virt-builder and pass the script as an argument. This will create a CentOS VM image with the contextualization packages already installed inside:

sudo virt-builder centos-7.7 --format qcow2 --run script.sh
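If you want to double-check that the context packages actually made it into the image, libguestfs can inspect it without booting it. A quick sanity check might look like this (the `/etc/one-context.d/` path is an assumption based on the layout of the context package):

```shell
# Read-only peek inside the freshly built image:
# list the contextualization scripts installed by one-context
virt-ls -a centos-7.7.qcow2 /etc/one-context.d/
```

If the listing shows the `loc-*`/`net-*` context scripts, the package was installed correctly.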

Now we have a generic CentOS VM image ready for OpenNebula! 😀

Setting up a gaming server

But what if we don’t want that VM image to be so generic? 😉 Let me introduce you to the wonders of Xonotic: licensed under the GPLv3 and first released in 2011, this well-known video game came to life after its creators joined forces to “create the best possible fast-paced open-source first-person shooter game”.

To prepare a VM image with a preinstalled Xonotic server, we just need to include a few additional lines into script.sh, the file we’ve used in the previous section to add the OpenNebula context to our CentOS VM image:

# Download Xonotic
wget https://dl.xonotic.org/xonotic-0.8.2.zip
# Install unzip to decompress the downloaded file
yum install -y unzip
# Decompress the game
unzip xonotic-0.8.2.zip
# Remove the downloaded file
rm -f xonotic-0.8.2.zip

With these changes in the script, we’ll get the game folder inside our VM image. Now all we need to do is get the Xonotic server properly configured. For this, we’ll follow the steps for a “Dedicated Server” described in the project’s documentation and—surprise, surprise—add some lines to our beloved script.sh 🙂 The files needed to set up a Xonotic server come with the game, inside the main game directory (in our case Xonotic/server/):

# Copy the game launcher (for Linux operating systems)
cp Xonotic/server/server_linux.sh Xonotic/
# Copy server.cfg to the user profile (root)
mkdir -p /root/.xonotic/data
cp Xonotic/server/server.cfg /root/.xonotic/data/
# Move the game directory to the user's home directory (root)
mv Xonotic /root/

With this configuration, we’d be able to log in to our VM at any point and launch the server manually by executing /root/Xonotic/server_linux.sh. But what if we want the server to run automatically when the VM starts? A piece of cake! We just need to set up a service to make this happen. Let’s create a simple file called xonotic.service with the following content:

[Unit]
Description=Xonotic Server Service

[Service]
ExecStart=/root/Xonotic/server_linux.sh

[Install]
WantedBy=default.target

With the option --copy-in file:destination, we can use virt-builder to copy xonotic.service into the /etc/systemd/system/ directory when generating the VM image. But before that, we just need to add some extra lines—this is the last time, I promise!—to script.sh:

# Set the standard permissions for a unit file
chmod 664 /etc/systemd/system/xonotic.service
# Reload systemd so it picks up the new service
systemctl daemon-reload
# Enable the service (we don't start it; it will start on boot)
systemctl enable xonotic.service

After these last changes to the script, the command to create a CentOS-based OpenNebula appliance with a ready-to-use Xonotic game server looks like this:

sudo virt-builder centos-7.7 --format qcow2 --copy-in xonotic.service:/etc/systemd/system/ --run script.sh 

Voilà! Of course, we can add many more configuration options to the virt-builder command, but this basic configuration will get the job done.
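For instance, a slightly more polished build might set the disk size, hostname, root password and timezone in the same run. The flag names below come from the virt-builder manual; the values are of course just placeholders:

```shell
# A few more useful virt-builder flags (all documented in virt-builder(1)):
#   --size           grow the disk image
#   --hostname       set the guest hostname
#   --root-password  set a known root password
#   --timezone       set the guest timezone
sudo virt-builder centos-7.7 --format qcow2 \
  --size 10G \
  --hostname xonotic-server \
  --root-password password:changeme \
  --timezone Europe/Madrid \
  --copy-in xonotic.service:/etc/systemd/system/ \
  --run script.sh
```

Note that with OpenNebula contextualization in place, most of these per-instance settings can instead be injected at instantiation time, which keeps the image itself generic.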

Now it’s time for you to give it a try and start creating your own customized OpenNebula VM images.

Enjoy and keep us posted!
