Using Docker on OpenNebula through LXD

LXD system containers differ from application containers (i.e. those managed by Docker, RKT, or even by LXC) in their target workload. LXD containers are meant to provide a highly efficient alternative to a full virtual machine: they contain a full OS with its own init process and are designed to provide the same experience as a VM. Docker containers are meant to run a single application process inside a confined and well-defined runtime environment. Both, however, use similar technology under the hood: LXD builds on LXC, and so did Docker until it moved from LXC to its own implementation in 2014. Essentially, the difference lies in how the container orchestrator runs the containers, and in the interaction between them and the host they run on.

Being able to provision both types of workloads brings real benefits to a cloud platform, and that is why OpenNebula incorporated support for LXD in version 5.8 “Edge”. OpenNebula can create LXD containers running on bare metal, but, in the case of Docker, compatibility comes through Docker Machine (available at the project’s marketplace as a KVM virtual appliance). This sacrifices the classic near-bare-metal performance of containers, since everything has to go through full virtualization in the KVM appliance. With LXD, which is itself capable of running Docker, you can avoid that loss.

If you want to learn more about running Docker on top of LXD, you can read this post by Stéphane Graber, LXD technical lead at Canonical Ltd, and also have a look at LXD’s FAQ section. We at OpenNebula will leverage both container management systems to orchestrate apps: Docker as a means of application distribution, and LXD as a convenient execution environment, fast, isolated, and directly integrated into the OpenNebula software stack. The app deployment will be managed by OpenNebula’s START_SCRIPT feature, available to contextualized images.

This solution provides 3 runtime levels:

  1. OpenNebula LXD node
  2. OpenNebula LXD container
  3. Docker App container

In this post we will deploy nginx with OpenNebula using a container from the LXD marketplace. An internet connection is required for both Docker Hub and the LXD marketplace, as well as an LXD node.

1) Download ubuntu_bionic – LXD from the LXD marketplace

2) Update the VM template

  • Add a public network interface
  • Set LXD_SECURITY_NESTING to yes
  • Add a network with internet access
  • Add a START_SCRIPT (a minimal sketch follows this list)
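As an illustration, here is a minimal sketch of what the template's context section with such a START_SCRIPT could look like; the nginx image and the port mapping are just assumptions for this example:

CONTEXT = [
  NETWORK = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
  START_SCRIPT = "#!/bin/bash
    # install the Docker runtime inside the LXD container
    apt-get update
    apt-get install -y docker.io
    # run nginx as a Docker app container, published on port 80
    docker run -d --name nginx -p 80:80 nginx" ]

For longer scripts, the START_SCRIPT_BASE64 attribute avoids quoting issues. With this in place, deploying the VM (step 3) brings nginx up automatically.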

Keep in mind that you can create a dedicated template for running Docker containers in order to avoid installing docker.io every time you want to run an app. This will save a lot of time! 😉
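One way to build such a dedicated template, sketched with standard CLI commands (the VM ID, the disk ID 0, and the image name below are assumptions): instantiate the container once, install docker.io by hand, power it off, and save the disk as a new image that the dedicated template then uses:

# onevm poweroff <vmid>
# onevm disk-saveas <vmid> 0 ubuntu_bionic_docker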

3) Deploy the VM

Extend the image size before deploying (the default 1GB isn’t enough).
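For instance, a size override can go straight into the template’s DISK section (SIZE is expressed in MB; the 4GB value below is just an illustrative choice):

DISK = [
  IMAGE = "ubuntu_bionic - LXD",
  SIZE = "4096" ]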

4) Voilà!

After a while you’ll see a Docker container running nginx inside an LXD container.
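A quick way to check, assuming your SSH key was injected via contextualization and the container got the IP used in this example:

# ssh root@192.168.150.101 docker ps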

Now, install lynx or your favorite browser and take a look at nginx (e.g. lynx 192.168.150.101).

You can run as many app containers as you want per LXD container, but remember: each LXD container needs its own docker.io setup, which adds storage overhead. That overhead becomes more negligible as the number of apps per LXD container increases.

Hope you have enjoyed this post, and don’t forget to leave us your comments below!

v.5.10 “Boomerang” Release Candidate is available

OpenNebula v.5.10 “Boomerang” is just about ready! The Release Candidate is now available for download. We owe a big thanks to you, the User Community, for your dedicated attention to testing, as we have been able to include fixes for several bugs you identified.

The OpenNebula team is plugged into “bug-fixing mode”. Please note that this is an RC release aimed at testers and developers, to try the new features and send feedback (always welcome) for the final release. Also note that there is no migration path from the previous stable version (5.8.5), nor a migration path to the final stable version (5.10).

Take some time to check out the latest details about what you can expect from v.5.10 “Boomerang”, and don’t hesitate to download this version to get an early peek at the 5.10 Boomerang feature set.


October 2019 – OpenNebula Newsletter

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

October was a busy month – one for which we have been waiting a long time. This month we cut a Beta release of OpenNebula 5.10 “Boomerang”, a version that will bring exciting new features across the platform, including NSX integration, a revamped Hooks Subsystem, and NUMA and CPU pinning, among others. Take a look at the Release Notes for a more detailed look, and download it to give it a shot. We would love to get your feedback!

As the demand to try out ONEedge is growing, we have taken miniONE and extended its usability to include the easy installation of an Edge cloud using bare-metal resources from Packet. If you are looking to take advantage of edge resources close to your users and to reduce latency within your private cloud, take a look at the simple steps with miniONE to deploy your own Edge cloud.

We have also been actively working on the VMware integration, in particular to include support for VMRC, which is a substitute for VNC offered by vCenter. Keep an eye out for upcoming updates.

Community

On October 21 and 22, we celebrated the 8th Annual OpenNebula Conference in Barcelona, Spain, where we welcomed a technically-minded group of attendees. During the first day, we held a Hands-on Administration tutorial, as well as a Hacking session, where attendees shared their experiences and solutions to common OpenNebula operational situations. Everything from networking and storage to provisioning schemes was discussed. The second day was also full of technical gems. We covered the GOCA Golang bindings of the OpenNebula API, and a couple of detailed use cases of OpenNebula at the network edge (CORD by Telefonica and gaming by Crytek). The rest of the morning talks showcased the flexibility and simplicity of OpenNebula in implementing multiple solutions like virtual labs, cloud VDI, and nested virtualized infrastructure. The day ended with four workshops with animated discussions about the upcoming features of OpenNebula, best practices to operate a cloud, and how to deploy container-based solutions with OpenNebula.

You can review the slides, videos, and photos from the event as they have been published.


Earlier this month, we saw a new addition to the OpenNebula Add-on Catalog: a contribution from FeldHost that allows OpenNebula to use an HPE 3PAR storage system for storing disk images. Take a look at the details of this new add-on on GitHub (https://github.com/OpenNebula/addon-3par/), and be sure to get involved wherever you can.

Outreach

VMworld 2019 EU will be held in Barcelona, Spain on November 4-7, 2019, and OpenNebula Systems will be stationed in the midst of the “Solutions Exchange” (Booth # E115), prepared and excited to showcase our latest release of vOneCloud – the open-source replacement for VMware vCloud. With our 5.10 Beta release available, we will have plenty to showcase and demo, so if you are attending, make sure to stop by our booth.

Stay connected!

OpenNebulaConf 2019 – Published Materials

This past October 21 and 22, we celebrated the Eighth Annual OpenNebula Conference in Barcelona, Spain. We were excited to welcome so many curious and enthusiastic attendees as we ran through a dynamic agenda including Hands-on Training, different break-out groups and Hacking sessions, a lineup of Use-case reviews and Presentations, as well as separate Workshop tracks.

All of the materials from the event are available for public viewing.

We send out our thanks to our Sponsors (StorPool, LINBIT, and NTS) for supporting the event, to our great lineup of Speakers, and to all of the attendees for sharing your excitement and your feedback about OpenNebula.

Use miniONE at the Edge

In our previous posts, we introduced miniONE as a simple evaluation tool that lets you deploy an all-in-one KVM or LXD installation in a few simple steps. We have just released a new version of miniONE with an option that enables you to try the new OpenNebula edge computing functionality using the bare-metal cloud infrastructure provider Packet.

Frontend

You will need one host for the OpenNebula frontend. It could be a physical host in your rack, your own VM, or even a VM running in the public cloud. Choose either the CentOS/RHEL or the Ubuntu/Debian family; just ensure it is a relatively fresh and updated system.

For this demo, we decided to run the frontend on a small AWS EC2 instance, so let’s start in the AWS console. For the operating system it could be Ubuntu Server 18.04 LTS; for the instance type we could choose t2.micro, or give it a little more memory by choosing t2.small. The only option you need to modify is adding some more storage, let’s say 25GB. Optionally you may open the HTTP ports (80/443) to be able to connect to Sunstone easily.
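If you prefer the AWS CLI over the console, an equivalent launch might look like this sketch (the AMI ID and the key name are placeholders you need to replace):

# aws ec2 run-instances --image-id ami-XXXXXXXX --instance-type t2.small \
    --key-name my-key \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":25}}]'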

miniONE

As soon as your instance is ready, connect to it and download the tool

# wget 'https://github.com/OpenNebula/minione/releases/latest/download/minione'

To start the edge deployment you will need to provide some details about the Packet resource: at least the Packet API token and the Packet project. Both can be easily found at https://packet.net

Then you should decide on the location for your Edge deployment. For the first Packet node, we chose Sunnyvale, CA, whose code name is sjc1. Feel free to choose a different location – just be sure the required Packet plan is available there. In any case, miniONE will validate your parameters before it starts to deploy.

And finally you need to pick the size of the node – Packet calls this a plan. We suggest keeping the default value, t1.small. This is what Packet calls the “cloud killer”, but if you want a different plan, provide it using the --edge-packet-plan parameter (see the example below).
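In that case the run command from the next step simply grows by one parameter, something like:

# bash minione --edge packet --edge-packet-facility sjc1 --edge-packet-token [token] --edge-packet-project [project] --edge-packet-plan [plan]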

Now you should be ready to run.

# bash minione --edge packet --edge-packet-facility sjc1 --edge-packet-token [token] --edge-packet-project [project]

The first time you run it, you may encounter the following error

Checking jq is installed  FAILED
Install jq package first

miniONE validates all the Packet parameters, and the jq command is needed to parse the JSON output. In our case we will install it using

# apt update
# apt-get install jq

Now we will run the deployment again. While miniONE is doing its job, you should see output similar to this

### Checks & detection
Checking AppArmor  SKIP will try to modify

### Main deployment steps:
Install OpenNebula frontend version 5.8
Install ONEProvision
Configure IPAM Packet, alias IP mapping driver, VM hooks
Trigger oneprovision
Export appliance and update VM template

Do you agree? [yes/no]:
yes

### Installation
Updating APT cache  OK
Configuring repositories  OK
Updating APT cache  OK
Installing OpenNebula packages  OK
Installing Ruby gems  OK
Installing opennebula-provision package   OK

### Configuration
Applying packet changes to oned.conf  OK
Configuring packet hooks in oned.conf  OK
Update ssh configs to accessing Packet hosts  OK
Switching OneGate endpoint in oned.conf  OK
Switching scheduler interval in oned.conf  OK
Setting initial password for current user and oneadmin  OK
Changing WebUI to listen on port 80  OK
Starting OpenNebula services  OK
Enabling OpenNebula services  OK
Add ssh key to oneadmin user  OK
Update ssh configs to allow VM addresses reusig  OK
Ensure own hostname is resolvable  OK
Checking OpenNebula is working  OK
Prepare packet template  OK
Checking packet template [/tmp/tmp.ukRnn6J8BE]  OK
Running oneprovision
2019-10-24 12:07:01 INFO  : Creating provision objects
WARNING: This operation can take tens of minutes. Please be patient.
2019-10-24 12:07:04 INFO  : Deploying
2019-10-24 12:10:40 INFO  : Monitoring hosts
2019-10-24 12:10:42 INFO  : Checking working SSH connection
2019-10-24 12:10:44 INFO  : Configuring hosts
ID: 794e1810-a9f4-4047-8601-b4aad4a7d086
OK
Exporting [Service WordPress - KVM] from Marketplace to local datastore  OK
Updating VM template  OK

and finally end up with a success report similar to this

### Report
OpenNebula 5.8 was installed
Sunstone [the webui] is runninng on:
  http://172.31.87.69/
Use following to login:
  user: oneadmin
  password: xDV36pWwGe

### Packet provisioned
  ID NAME            CLUSTER   TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 147.75.82.249   PacketClu   0                  -                  - init

Now that all is done, just wait a little until OpenNebula monitors the Packet host and you can see its state turn to on. Check using

# onehost list
  ID NAME            CLUSTER   TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT  
   0 147.75.82.249   PacketClu   0       0 / 400 (0%)     0K / 7.8G (0%) on

Run the VM

At this point, you have a ready-to-use OpenNebula frontend connected to the KVM hypervisor on the Packet host, and you are ready to deploy your VMs. You can do it using the Sunstone frontend or just by typing

# onetemplate instantiate 0

For demo purposes, we chose the default appliance to be our WordPress service app. So when your first VM is running, go to the public IP of the VM and you should get a bootstrapped WordPress webpage. To see the VM’s public IP address, run

# onevm show 0 | grep ETH0_ALIAS0_IP=
ETH0_ALIAS0_IP="147.75.82.242",

However, you might choose a different appliance for your case. Go to our marketplace and pick one of the systems. Be aware that you need to pass the exact name to the --edge-marketapp-name parameter; for instance, CentOS 7 - KVM is a valid option, as shown below.
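For example, re-running the deployment with that appliance would look like this (same placeholders as before):

# bash minione --edge packet --edge-packet-facility sjc1 --edge-packet-token [token] --edge-packet-project [project] --edge-marketapp-name 'CentOS 7 - KVM'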

miniONE gives you the option to extend the deployment by adding additional hypervisor nodes at the edge. To do so, simply add the --node option to the deployment command and run it again. For the second node, we chose Amsterdam as the location, whose code name is ams1.

# ./minione --edge packet --node --edge-packet-token [token] --edge-packet-project [project] --edge-packet-facility ams1

When miniONE finishes, you should now see 2 hosts:

# onehost list
  ID NAME            CLUSTER   TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT  
   1 147.75.82.195   PacketClu   0       0 / 400 (0%)     0K / 7.8G (0%) on    
   0 147.75.82.249   PacketClu   1    100 / 800 (12%)   768M / 7.8G (9%) on

Under the hood, miniONE uses the DDC tool called oneprovision, which also offers a command-line interface. To list the Packet resources, type

# oneprovision list
                                  ID NAME                      CLUSTERS HOSTS VNETS DATASTORES STAT           
642e1b04-d266-4ed6-abf4-906ca4a08898 PacketProvision-101              1     1     2          2 configured     
794e1810-a9f4-4047-8601-b4aad4a7d086 PacketProvision-100              1     1     2          2 configured    

and to see the details

# oneprovision show 642e1b04-d266-4ed6-abf4-906ca4a08898

PROVISION 642e1b04-d266-4ed6-abf4-906ca4a08898 INFORMATION                      
ID      : 642e1b04-d266-4ed6-abf4-906ca4a08898
NAME    : PacketProvision-101
STATUS  : configured

CLUSTERS
101

HOSTS
1

VNETS
3
2

DATASTORES
103
102

Pricing and cleanup

If you choose the smallest AWS instance for the frontend, as we did, you should fit within the Free Tier (up to 750 hrs). The price of the Packet t1.small instance is $0.07 per hour, so even if you play with the deployment for quite a while (say, 14 hours comes to $0.98), you could hardly exceed $1.

Before you finish the evaluation, don’t forget to clean up your AWS EC2 instance as well as the Packet host. To delete the Packet host you can again use the oneprovision command

# oneprovision delete [id] --cleanup

miniONE is a tool that really helps get users “up and running” with an OpenNebula environment. We hope you see how easy it is to start your own edge deployment, or even extend your current setup to the edge.