OpenNebula at the Edge with our new AWS Greengrass Appliance

Recently we showcased the use of our newly-developed appliance and what you can achieve with it: Automatic Deployment of AWS IoT Greengrass at the Edge. The appliance was still in a “pre-release” state, but now we can finally announce our new AWS Greengrass service appliance in the OpenNebula Marketplace.

If you missed the aforementioned blog post and you are interested in the Edge computing paradigm (especially with AWS IoT Greengrass), we recommend taking a look at it now. It demonstrates a real use case of our appliance with a practical example, and gives you an idea of what AWS Greengrass can do for you and just how much easier the ONE service appliance makes it.

Take a look at the video screencast of the entire automated deployment.

To make the most of our appliance, you can first familiarize yourself with AWS Greengrass by taking a look at the official “What is AWS IoT Greengrass?” tutorial. The truly essential reading, however, is the OpenNebula AWS IoT Greengrass documentation. It will make crystal clear just how much simpler this ONE service appliance will make your life.

Share your feedback! We’d love to hear what you think.

OpenNebulaConf 2019 – The Agenda is Set!

The OpenNebulaConf 2019 Agenda is published and ready, including new Technical Workshops and bountiful coverage of “all things Cloud and Edge”.

OpenNebulaConf 2019, being held this year in Barcelona, Spain on October 21-22, 2019, is approaching quickly. And this year we have a packed agenda with plenty to offer for everyone. This year’s release of OpenNebula 5.8 “Edge”, which included exciting elements like LXD container integration and automated Distributed Data Center (DDC) functionality, allows us to showcase some innovative use-case demonstrations in the realm of Edge Computing, among other things. We are also inching closer to an OpenNebula 5.10 release, which will only increase the momentum toward cutting-edge capabilities.

In addition to an updated version of our Hands-On Tutorial, and the opportunity to listen to insights and lessons learned from members of the User Community, we will be introducing some “Hacking Workshops”, as well as other DevOps- and Application-focused Workshops.

You won’t want to miss it. Check out the OpenNebulaConf agenda here.

Remember that “Early Bird Pricing” is still in effect, so don’t wait to register.

A huge thank you to our Platinum Sponsors LINBIT and StorPool for their support and dedication in making this a “can’t-miss” event! For anyone who would like to join as a sponsor, there is still time. Review the Sponsorship details here. Join us and help foster the OpenNebula Community.

Looking forward to seeing you in Barcelona!

Automatic Deployment of AWS IoT Greengrass at the Edge

A new OpenNebula appliance, along with new innovative Edge features, will make deploying your AWS IoT Greengrass Edge environment simple and pain-free.

AWS IoT Greengrass is a service which extends Amazon Web Services to nodes “on the edge”, where data is gathered as near as possible to its source, processed, and sent to the main cloud. It’s a framework which provides an easy and secure way for edge devices and clouds to communicate; it’s a means to run custom AWS Lambda functions on the core devices, to access AWS services, or to work offline without an Internet connection and synchronize data later, once a connection is available.

While AWS provides the software stack to connect edge nodes to AWS IoT Greengrass, it doesn’t offer the edge computing resources themselves. The SDK and services are expected to be installed on an on-premises cloud within a company data center, or with any preferred edge computing provider.

The OpenNebula team is developing an easy-to-use service appliance for the OpenNebula Marketplace with the pre-installed SDK and services for AWS IoT Greengrass and with on-boot automation. This appliance will allow you to easily run Greengrass end-nodes as virtual machines in your on-premises cloud managed by OpenNebula. The appliance can act as a Greengrass core, device, or both. Automation inside the appliance prepares the instance for its specific use on the very first boot. Preparation can be semi-manual, where pre-generated certificates are installed in the right places, or fully automatic, where you pass only the AWS credentials and leave all the AWS registration complexities to the appliance itself.
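For the fully automatic mode, this boils down to passing the credentials as contextualization variables when the appliance is instantiated. Below is a minimal sketch; the template name and the ONEAPP_* variable names are illustrative placeholders, so check the appliance documentation for the exact context parameters it accepts:

# Instantiate the appliance template, passing AWS credentials through
# contextualization; the variable names below are placeholders.
onetemplate instantiate 'Service AWS IoT Greengrass' --name gg-core-ams1 \
  --context ONEAPP_AWS_ACCESS_KEY_ID="AKIA...",ONEAPP_AWS_SECRET_ACCESS_KEY="...",ONEAPP_GREENGRASS_GROUP_NAME="monitoring-ams1"

On first boot, the automation inside the appliance reads these values and performs the Greengrass registration against AWS on its own.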

While you can run Greengrass end-devices on your own cloud, you still need an AWS account and a subscription to the AWS IoT Greengrass central cloud service to orchestrate your IoT devices. Learn more in Getting Started with AWS IoT Greengrass.

This appliance provides AWS IoT Greengrass integration that allows you to easily extend your OpenNebula cloud to the edge. It doesn’t get any easier!

Watch the complete demo screencast

A Real World Example

We are going to showcase OpenNebula 5.8 with its new Distributed Data Centers (DDC) feature to automatically deploy an on-demand edge cloud integrated with AWS IoT Greengrass. From nothing to a ready-to-use, geographically distributed infrastructure with Greengrass nodes as close as possible to their customers, automated and within tens of minutes.

As a cover story for this demo, we have chosen a model company providing monitoring as a service. The company needs a large distributed infrastructure to be able to provide performance metrics for their customers’ websites from different locations around the world. The goal of this service is to alert the customer when their website is down (completely or just from a particular location), when the website is too slow, or when the SSL certificates are near their expiration.

A simplified high-level schema of our model monitoring service:

The schema is divided into two parts: the general services, which store the measured metrics, generate graphs and dashboards, alert the customers (e-mail, SMS), or trigger custom recovery actions; and the core distributed monitoring infrastructure (red box), with probes responsible for measuring how each customer’s sites perform. While the general services can run in any single datacenter, the monitoring probes must run from as many different parts of the world as possible.

The demo focuses only on the core distributed monitoring infrastructure from the red box.

To implement the distributed monitoring infrastructure, we deploy the AWS IoT Greengrass core services on our custom edge nodes. These provide us with an environment to control and run the monitoring probes, and take care of transporting the measurements to the higher-level processing services. We use Packet Bare Metal Hosting as the edge cloud provider for this demo, but any suitable resources (on-premises or public cloud) can be used.

A low-level schema of a single monitoring location:

Each monitoring location is implemented as an OpenNebula KVM virtual machine running the AWS IoT Greengrass core with a custom monitoring AWS Lambda function. The Lambda function waits for an MQTT message requesting a particular site to monitor. When the request is received, the function triggers a simple latency measurement of the customer’s site via the ping command and sends back a message with the result.

The demo Monitoring Agent is implemented as a simple console application which periodically broadcasts an MQTT message to the monitoring Lambda functions with a request to monitor a particular host, and shows all the measurements received from the active locations in the terminal. Moreover, it computes the minimum, maximum, and average of the measured values. See the demo Monitoring Agent run in the video, or the screenshot in the section with results.
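To give an idea of the message flow, here is a minimal sketch of publishing such a monitoring request by hand with the AWS CLI; the topic name and payload fields are illustrative, not the exact ones used by the demo:

# Publish a monitoring request to AWS IoT; the subscribed Lambda function
# in each Greengrass group measures the latency and publishes the result back.
# (With AWS CLI v2, add --cli-binary-format raw-in-base64-out.)
aws iot-data publish \
  --topic 'monitoring/request' \
  --payload '{"host": "www.opennebula.org"}'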

During the deployment, OpenNebula AWS IoT Greengrass appliances are provided with AWS credentials and the automation inside creates the required entities on the AWS side. For each monitoring location, a dedicated Greengrass group is created:

with a monitoring Lambda function and appropriate message subscriptions.

A Distributed Edge Cloud – fully deployed and configured

The IoT end-node infrastructure was created ad hoc on bare metal resources kindly provided by Packet Bare Metal Hosting. The OpenNebula DDC oneprovision tool was used to deploy the physical hosts and configure them as KVM hypervisors. On these virtualization hosts, we ran the OpenNebula AWS IoT Greengrass pre-release appliance. The virtual machine was parameterized only with AWS credentials, and the automation inside managed the registration with AWS. To finalize the deployment, additional AWS CLI commands configured the message subscriptions and triggered the deployment of our custom monitoring Lambda function.
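Roughly, and with placeholder file names and IDs, the workflow per location looks like this:

# 1) Deploy a bare metal host on Packet and configure it as a KVM
#    hypervisor with the DDC tooling (the provision template is an example).
oneprovision create packet-ams1.yaml

# 2) Instantiate the Greengrass appliance VM on the new host, then wire up
#    the IoT side with the AWS CLI: register the subscriptions and push a
#    new deployment of the monitoring Lambda function to the group.
aws greengrass create-subscription-definition --initial-version file://subscriptions.json
aws greengrass create-deployment \
  --group-id <greengrass-group-id> \
  --group-version-id <group-version-id> \
  --deployment-type NewDeployment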

We deployed the distributed Greengrass infrastructure across 15 different Packet locations. The deployment was divided into 7 phases, and the time of each phase was measured for each location.

  • Phase 1: Deployment of a new server on Packet with just the base operating system.
  • Phase 2: Configuration of the host with the KVM hypervisor services.
  • Phase 3: OpenNebula transfers its drivers and monitors the host.
  • Phase 4: The KVM virtual machine with the AWS IoT Greengrass services is started.
  • Phase 5: Wait until the VM is booted and configured for use as an AWS IoT Greengrass core.
  • Phase 6: Configure the IoT subscriptions and trigger the Lambda function deployment.
  • Phase 7: Wait until the monitoring Lambda function returns its first measurements.

The following table shows the timings (in seconds) of each phase for all monitoring locations:

Location   Phase 1   Phase 2     Phase 3   Phase 4   Phase 5     Phase 6    Phase 7   TOTAL
           Deploy    Configure   Monitor   Run VM    Bootstrap   GG setup   Wait
AMS1          269       144          3        16        34          14        14        496
ATL1          301       198        107        36        18          12        12        686
BOS2          516       187         98        35        17          13        11        880
DFW1          249       237        134        43        19          13        11        709
DFW2           62       267        142        44        24          13        12        567
EWR1           66       231         97       164        34          12        14        621
FRA2          243        98         11        11        20          14        12        411
HKG1          265       385        274        83        24          17        15       1066
IAD1          248       189         93        34        17          12        12        607
NRT1          290       472        287       250        39          16        12       1369
SEA1          242       255        152        49        19          13        12        744
SIN1          268       316        194        60        24          16        14        895
SJC1          270       328        169       260        36          14        13       1093
SYD1          263       512        381       115        23          17        13       1327
YYZ1          242       198         99        34        19          12        10        617

Description of each Packet location.

Deployment of the whole infrastructure, from zero to 15 Greengrass locations with a working monitoring Lambda function, took only 23 minutes and 1 second. The infrastructure was destroyed and the resources released in only 27 seconds.

There are various circumstances that affect each deployment phase.

  • For phase 1, the deployment times of the physical hosts depend entirely on the hosting provider, the availability of suitable hosts, and the performance of its internal services.
  • For the configuration phase 2, the latency between the front-end and the host, and the performance of the nearest OS package mirrors in the locality, come into play.
  • For the monitoring phase 3, the time depends on the remote host’s network latency and throughput.
  • For phase 4, the start of the Greengrass VM depends on the time required to transfer the 596 MiB image to the remote host. Fully booting the VM and configuring it as a Greengrass core may further be affected by the host CPU performance or AWS API latencies.
  • Phase 6 was done directly from the front-end over the AWS CLI (API) – network latencies or AWS API slowness could affect these times.
  • And phase 7 depends entirely on how quickly AWS deploys the monitoring Lambda function and puts it into a running state inside our VM with the Greengrass core services.

We demonstrated the usability of the demo solution by running our console demo Monitoring Agent mentioned above. We requested monitoring of our website www.opennebula.org and got live latencies measured from different parts of the world at a 1-second interval. The screenshot below captures the state on 2019/07/02 19:37:01.

Based on the values from the screenshot above, we can see the website is performing excellently, with 1 ms latencies from Silicon Valley (SJC1), and just fine, with 77 ms latencies from Boston (BOS2), while the worst latencies, above 150 ms, come from the Frankfurt (FRA2), Hong Kong (HKG1), and Sydney (SYD1) locations. These values indicate how well the website performs for customers in those locations, although they aren’t the only factors involved.

Powerful Capability Packaged with Simplicity

For the presented demo use case, the distributed infrastructure is crucial to provide performance metrics from different parts of the world. A simpler solution could be run from a single location, but with limited usability (e.g., the customer runs their website mainly in Europe, but we measure the site from the USA), and it would be prone to incidents in that locality (power outages, network connectivity problems).

The right question here is not why we need a geographically distributed infrastructure, but how we build it and how we design the application running inside it. The hardest way is to build the whole solution from scratch, considering all the components you have to select, install, configure, and run (message broker, application server, database, your own application).

An easier way is to use a platform providing the base application services and to focus only on your own application logic running inside it. One solution to consider is AWS IoT Greengrass, which can run custom standalone code or AWS Lambda functions on your side, and provides easy and secure communication between the connected components. The OpenNebula AWS IoT Greengrass appliance brings a quick and automatic way to join your OpenNebula-managed computing resources to the AWS IoT Greengrass cloud service, and to use them to run your own code leveraging the IoT features. On boot, the appliance provides a ready-to-use Greengrass core or device.

The figures from this exercise demonstrate with unmistakable clarity how simple it is to create an AWS IoT Greengrass cloud using OpenNebula with edge resources distributed globally. If you are looking to utilize AWS IoT Greengrass, the OpenNebula appliance comes packed with everything you need. And with the new innovative OpenNebula edge features, you have the automated capability to take this cloud to the edge with lightning-fast speed and unmatched ease.

For this demo, the pre-release version of the appliance was used. The official release is expected in the coming weeks.

vOneCloud / OpenNebula at VMworld 2019 US and Europe

The time is quickly approaching when VMware will be hosting its showcase events in both the US and Europe. VMworld 2019 is a cornerstone event where everyone with an interest in virtualization and cloud computing will be networking with industry experts, and the OpenNebula team will be in attendance at both.

The OpenNebula team will be running booths within the “Solutions Exchange”, highlighting some of the upcoming “stand-out” features that you’ll be seeing in OpenNebula 5.10 and vOneCloud 3.6.

For example, OpenNebula 5.10 and vOneCloud 3.6 will include support for VMware NSX-V, comprising features like:

  • Logical switches
  • Distributed Firewall (DFW)
  • Security Groups (SG)

You’ll soon be able to work with logical switches to create, delete, and modify virtualwires through OpenNebula, and to attach and detach them to and from its instances and templates. Also, micro-segmentation will be supported, working in conjunction with the Distributed Firewall and Security Groups, enabling the next level of security even for servers on the same network.

Another improvement in the VMware integration will be support for VMRC through Sunstone using HTML5. This avoids the need to connect directly from the client browser to the ESXi host running the specific VM in order to gain console access. Full support for all disk buses (SATA, IDE, SCSI) will be added as well. And vCenter driver refactoring, better error handling, and performance improvements are all “on the docket”.

There will be plenty to review and highlight. Make sure to take some time to stop by our booth.

In the US / San Francisco event, we will be at Booth # 1669.
Currently, our booth for the EU / Barcelona event is TBD.

June 2019 – OpenNebula Newsletter

Our newsletter contains the highlights of the OpenNebula project and its Community throughout the month.

Technology

The temperatures are rising for most of us inhabitants of the Northern Hemisphere. And so is our focus on building out OpenNebula 5.10!

We continue to make headway in bringing innovative capabilities to the realm of edge computing. One effort that we have “in the works” is a new appliance, which will soon be available in our Marketplace and will offer the ability to deploy directly into your cloud and connect to AWS IoT Greengrass.

We are also working on beefing up our “hooks” system. Our planned improvements will allow the cloud administrator to manage hooks dynamically via oned (i.e., create, delete, run, etc.). It will also be possible to define hooks for API calls without sacrificing any of the current functionality.

Our current efforts also include support for VMware NSX-V. You’ll be seeing features like:

  • Logical switches
  • Distributed Firewall (DFW)
  • Security Groups (SG)

You’ll soon be able to work with logical switches to create, delete, and modify virtualwires through OpenNebula, and to attach and detach them to and from its instances and templates. Also, micro-segmentation will be supported, working in conjunction with the Distributed Firewall and Security Groups, enabling the next level of security even for servers on the same network.

Community

This month was chock-full of updates and news for the community. Firstly, we announced OpenNebula’s key participation and membership in the Vapor IO “Kinetic Edge Alliance“. This is a focused effort amongst a coalition of Technology and Service providers to understand the needs within our market for Edge Computing, and to push its development forward, making edge resources and capabilities more accessible and valuable.

Additionally, the Linux Journal published an article on OpenNebula as part of their “FOSS Project Spotlight”, outlining some of the key features of OpenNebula 5.8 that are broadening its reach for integration and usability.

We saw a new contribution to the OpenNebula Add-on catalog this month, increasing the data storage add-ons available for use by the community. This one is an “unofficial” storage driver integrating with LINSTOR, the open-source SDS solution from LINBIT.

Earlier we published a comprehensive and useful document outlining how to best scale your OpenNebula environment to ensure prime performance. And as we gain additional insight and measurable results, we continue to share details with the community. You can read the additional specifications in the “Scalability Testing and Tuning Guide“.

Lastly, as you navigate around the OpenNebula.org website, you will have seen that we have brought a fresh, new look to the web. Hopefully you will find your way around the site without issue. If you have feedback, certainly let us know.

Outreach

The OpenNebulaConf 2019 is quickly approaching, and this month we saw StorPool join the event as a Platinum Sponsor. We give huge thanks to them for helping to make this a must-attend event in the world of Virtualization and Cloud Computing. Be sure to join us in Barcelona, Spain, as the “Early Bird Pricing” is still available. Find out more about the OpenNebulaConf 2019.

We will also be attending both VMworld 2019 events. These are cornerstone events for everyone interested in virtualization and cloud computing, and OpenNebula Systems will be in attendance at both the US and European conferences to shed some light on upcoming “stand-out” features that you’ll be seeing in OpenNebula 5.10 and vOneCloud 3.6.

Make sure to swing by our booths.

We also have two OpenNebula TechDays approaching in September.

Be on the lookout for agendas to be published very soon.

Stay connected!

“linstor_un” — New storage driver for OpenNebula

Not so long ago, the guys from LINBIT presented their new SDS solution, Linstor. This is a fully free storage solution based on proven technologies: DRBD, LVM, and ZFS. Linstor combines simplicity with a well-developed architecture, which allows it to achieve stability and quite impressive results.

Today I would like to tell you a little about it and show how easily it can be integrated with OpenNebula using linstor_un – a new driver that I developed specifically for this purpose.

Linstor in combination with OpenNebula will allow you to build a high-performance and reliable cloud, which you can easily deploy on your own infrastructure.

Linstor architecture

Linstor is neither a file system nor block storage by itself. Linstor is an orchestrator that provides an abstraction layer to automate the creation of volumes on LVM or ZFS and replicate them using DRBD9.

Breaking stereotypes

But wait, DRBD? Why automate it, and how will it work at all?

Let’s remember the past, when DRBD8 was quite popular and its standard usage implied creating one large block device and cutting it into a lot of small pieces using LVM, a lot like mdadm RAID-1 but with network replication.

This approach is not without drawbacks, and therefore, with the advent of DRBD9, the principles of building storage have changed. Now a separate DRBD device is created for each new virtual machine.

The approach with independent block devices allows better utilization of space in the cluster, and adds a number of additional features. For example, for each such device you can define the number of replicas, their location, and individual settings. The devices are easy to create and delete, snapshot, resize, encrypt, and much more. It is worth noting that DRBD9 also supports quorum.
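To give a feeling for how lightweight these per-device operations are with the linstor client (installed later in this article), here is a rough sketch, assuming a resource named myres on a snapshot-capable backend such as ThinLVM or ZFS:

# Take a snapshot of the resource.
linstor snapshot create myres snap1

# Grow volume 0 of the resource online to 2 GiB.
linstor volume-definition set-size myres 0 2G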

Resources and backends

When creating a new block device, Linstor places the required number of replicas on different nodes in the cluster. Each such replica is called a DRBD resource.

There are two types of resources:

  • Data resource — a DRBD device created on a node and backed by an LVM or ZFS volume. Several backends are currently supported, and their number is constantly growing: LVM, ThinLVM, and ZFS, the last two of which allow you to create and use snapshots.
  • Diskless resource — a DRBD device created on a node without any backend; you can still use it like a regular block device, and all read/write operations are redirected to the data replicas. The closest analogue to a diskless resource is an iSCSI LUN.

Each DRBD resource can have up to 8 replicas, and by default only one of them can be active — the Primary. All the others will be Secondary, and they cannot be used as long as at least one Primary exists; they will just replicate all data between themselves.

When you mount a DRBD device, it automatically becomes Primary, so even a diskless resource can be Primary in DRBD terminology.
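You can verify this on any node with drbd-utils; the resource name here is just an example:

# Show the local role (Primary/Secondary) and the peers' state for one
# resource, or for all resources on the node.
drbdadm status myres
drbdadm status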

So why is Linstor needed?

As it delegates all resource-intensive tasks to the kernel, Linstor is essentially a regular Java application that allows you to easily automate the creation and management of DRBD resources. At the same time, each resource created by Linstor is an independent DRBD cluster, which works independently of the state of the control plane and of the other DRBD resources.

Linstor consists of only two components:

  • Linstor-controller — the main controller, which provides an API for creating and managing resources. It also communicates with the satellites, checks their free space, and schedules new resources on them. It runs as a single instance and uses a database that can be either internal (H2) or external (PostgreSQL, MySQL, MariaDB).
  • Linstor-satellite — installed on all storage nodes; it provides information about free space to the controller, and performs the tasks received from the controller to create and delete volumes and the DRBD devices on top of them.

Linstor uses the following key terms:

  • Node — a physical server where DRBD resources will be created and used.
  • Storage Pool — an LVM or ZFS pool created on a node, used to place new DRBD resources. A diskless pool is also possible; it can contain only diskless resources.
  • Resource Definition — essentially a prototype of a resource that describes its name and all of its properties.
  • Volume Definition — each resource can consist of several volumes, and each volume must have a size. These parameters are described in the volume definition.
  • Resource — the created instance of the block device; each resource must be placed on a certain node and in some storage pool.

Linstor installation

I recommend using Ubuntu for the base system, because it has a ready PPA repository:

add-apt-repository ppa:linbit/linbit-drbd9-stack
apt-get update

Or Debian, where Linstor can be installed from the official repository for Proxmox:

wget -O- https://packages.linbit.com/package-signing-pubkey.asc | apt-key add -
PVERS=5 && echo "deb http://packages.linbit.com/proxmox/ proxmox-$PVERS drbd-9.0" > \
    /etc/apt/sources.list.d/linbit.list
apt-get update
Controller

Everything is simple here:

apt-get install linstor-controller linstor-client
systemctl enable linstor-controller
systemctl start linstor-controller
Storage-nodes

Currently, the Linux kernel ships with an in-tree DRBD8 kernel module; unfortunately, it does not suit us, so we need to install DRBD9:

apt-get install drbd-dkms

As practice shows, most difficulties arise precisely because the DRBD8 module is loaded into the system instead of DRBD9. Luckily, this is easy to check by executing:

modprobe drbd
cat /proc/drbd

If you see version: 9, then everything is fine. If you see version: 8, something went wrong and you need to take additional steps to find out where the problem is.

Now install linstor-satellite and drbd-utils:

apt-get install linstor-satellite drbd-utils
systemctl enable linstor-satellite
systemctl start linstor-satellite

Cluster creation

Storage pools and nodes

Let’s use ThinLVM as the backend, because it is the simplest and supports snapshots.
Install the lvm2 package, if you haven’t already done so, and create a ThinLVM pool on all the storage nodes:

sudo vgcreate drbdpool /dev/sdb
sudo lvcreate -L 800G -T drbdpool/thinpool

All further actions should be performed directly on the controller:

Add the nodes:

linstor node create node1 127.0.0.11
linstor node create node2 127.0.0.12
linstor node create node3 127.0.0.13

Create storage pools:

linstor storage-pool create lvmthin node1 data drbdpool/thinpool
linstor storage-pool create lvmthin node2 data drbdpool/thinpool
linstor storage-pool create lvmthin node3 data drbdpool/thinpool

Now, let’s check the created pools:

linstor storage-pool list

If everything is done correctly, then we should see something like:

+-------------------------------------------------------------------------------------------------------+
| StoragePool | Node  | Driver   | PoolName          | FreeCapacity | TotalCapacity | SupportsSnapshots |
|-------------------------------------------------------------------------------------------------------|
| data        | node1 | LVM_THIN | drbdpool/thinpool |       64 GiB |        64 GiB | true              |
| data        | node2 | LVM_THIN | drbdpool/thinpool |       64 GiB |        64 GiB | true              |
| data        | node3 | LVM_THIN | drbdpool/thinpool |       64 GiB |        64 GiB | true              |
+-------------------------------------------------------------------------------------------------------+

DRBD-resources

Now let’s try to create a new DRBD resource:

linstor resource-definition create myres
linstor volume-definition create myres 1G
linstor resource create myres --auto-place 2

Check the created resources:

linstor resource list

Fine! We can see that the resource was created on the first two nodes. We can also try to create a diskless resource on the third one:

linstor resource create --diskless node3 myres

You can always find this device on the nodes under the /dev/drbd1084 or /dev/drbd/by-res/myres/0 path.
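Since it behaves like any other block device, you can, for example, put a filesystem on it and mount it (which, as described above, promotes the resource to Primary on that node):

# Create a filesystem on the DRBD device and mount it; mounting promotes
# the resource to Primary on this node.
mkfs.ext4 /dev/drbd/by-res/myres/0
mount /dev/drbd/by-res/myres/0 /mnt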

This is how Linstor works; you can get more information from the official documentation.

Now I’ll show how to integrate it with OpenNebula.

OpenNebula configuration

I will not go deep into the process of setting up OpenNebula, because all the steps are described in detail in the official documentation, which I recommend you consult. I will cover only the integration of OpenNebula with Linstor.

linstor_un

To reach this goal, I wrote my own driver — linstor_un. At the moment it is available as an add-on and is installed separately.

The entire installation is done on the OpenNebula front-end nodes and does not require additional actions on the compute nodes.

First of all, we need to make sure that we have jq and linstor-client installed:

apt-get install jq linstor-client

The command linstor node list must display a list of nodes. All OpenNebula compute nodes must be added to the Linstor cluster.

Download and install the addon:

curl -L https://github.com/OpenNebula/addon-linstor_un/archive/master.tar.gz | tar -xzvf - -C /tmp

mv /tmp/addon-linstor_un-master/vmm/kvm/* /var/lib/one/remotes/vmm/kvm/

mkdir -p /var/lib/one/remotes/etc/datastore/linstor_un
mv /tmp/addon-linstor_un-master/datastore/linstor_un/linstor_un.conf /var/lib/one/remotes/etc/datastore/linstor_un/linstor_un.conf

mv /tmp/addon-linstor_un-master/datastore/linstor_un /var/lib/one/remotes/datastore/linstor_un
mv /tmp/addon-linstor_un-master/tm/linstor_un /var/lib/one/remotes/tm/linstor_un

rm -rf /tmp/addon-linstor_un-master

Now we need to add it to the OpenNebula configuration; follow the simple steps described here to achieve this.
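In short, this means registering linstor_un as a datastore and transfer manager driver in /etc/one/oned.conf. The snippet below is only an illustrative sketch of that pattern; the exact attribute values are listed in the instructions linked above:

# Append "linstor_un" to the ARGUMENTS list of the existing TM_MAD and
# DATASTORE_MAD sections in /etc/one/oned.conf, then declare the driver
# (values shown here are illustrative, not authoritative):
TM_MAD_CONF = [
    NAME = "linstor_un", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES"
]
DS_MAD_CONF = [
    NAME = "linstor_un", REQUIRED_ATTRS = "BRIDGE_LIST", PERSISTENT_ONLY = "NO",
    MARKETPLACE_ACTIONS = "export"
]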

Then restart OpenNebula:

systemctl restart opennebula

And add our datastores. First, the system one:

cat > system-ds.conf <<EOT
NAME="linstor-system"
TYPE="SYSTEM_DS"
STORAGE_POOL="data"
AUTO_PLACE="2"
CLONE_MODE="snapshot"
CHECKPOINT_AUTO_PLACE="1"
BRIDGE_LIST="node1 node2 node3"
TM_MAD="linstor_un"
EOT

onedatastore create system-ds.conf

And the images one:

cat > images-ds.conf <<EOT
NAME="linstor-images"
TYPE="IMAGE_DS"
STORAGE_POOL="data"
AUTO_PLACE="2"
BRIDGE_LIST="node1 node2 node3"
DISK_TYPE="BLOCK"
DS_MAD="linstor_un"
TM_MAD="linstor_un"
EOT

onedatastore create images-ds.conf
  • The AUTO_PLACE option sets the number of data replicas that will be created for each new image in OpenNebula.
  • The CLONE_MODE option describes the mechanism used to clone images during virtual machine creation: snapshot — creates a snapshot of the image and deploys a virtual machine from this snapshot; copy — creates a full copy of the image for each virtual machine.
  • In BRIDGE_LIST it is recommended to specify all the nodes that will be used to perform image cloning operations.

The full list of supported options is given in the project’s README file.

The installation is finished; now you can download an appliance from the official OpenNebula Marketplace and instantiate VMs from it.
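For example, a rough sketch with placeholder names:

# Import an appliance from the Marketplace into the new images datastore
# and boot a VM from it; its disks will be DRBD resources managed by Linstor.
onemarketapp export 'Ubuntu 18.04' ubuntu1804 --datastore linstor-images
onetemplate instantiate ubuntu1804 --name test-vm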

Link to the project:
https://github.com/OpenNebula/addon-linstor_un

OpenNebula Brings Private Edge Cloud Orchestration to Vapor IO’s Kinetic Edge

OpenNebula announced that it’s bringing cloud and edge infrastructure orchestration to Vapor IO’s Kinetic Edge™ platform, the world’s fastest-growing colocation and interconnection system at the edge of the wireless network. OpenNebula’s contribution to the Kinetic Edge will center on bringing a simple and flexible framework for deploying and managing an edge cloud, where cloud resources can be orchestrated. OpenNebula’s support for multiple hypervisor technologies, including lightweight LXD containers, along with its native integration with bare metal resource providers, fits perfectly within the puzzle of making optimal use of edge resources for private cloud infrastructures.

“OpenNebula offers a powerful open source platform for bringing the power of private cloud to the Kinetic Edge”

“The need for cloud-like capabilities is moving quickly to the edge, and OpenNebula is in perfect alignment, as it extends its native cloud management capabilities to easily incorporate edge resources, allowing companies to develop and extend their private clouds deployments on the edge”, says OpenNebula VP of Engineering, Constantino Vazquez.

OpenNebula has also joined Vapor IO’s Kinetic Edge Alliance (KEA) as a technology partner. The KEA is an industry alliance of leading software, hardware, networking and integration companies committed to driving the broad adoption of compute, storage, access and interconnection at the edge of the cellular network, simplifying edge computing for the masses. The KEA is helping to put all of the pieces together to formulate dynamic solutions to bring compute resources closer to the users and to reduce latency, without magnifying the complexity or the cost.

“OpenNebula offers a powerful open source platform for bringing the power of private cloud to the Kinetic Edge,” said Matt Trifiro, CMO of Vapor IO. “Edge environments are particularly challenging for cloud orchestration, as they often span dozens or even hundreds of locations. OpenNebula is one of the few companies with a proven solution for building distributed clouds from core to edge.”

OpenNebula’s recent exercise launching a video gaming distributed cloud across 17 global locations has proven fundamental in demonstrating the comprehensive capabilities and resulting ease with which OpenNebula addresses the growing demands of edge computing.

View the published Business Wire Press Release.


StorPool – A Platinum Sponsor for OpenNebulaConf 2019

We are extremely excited to announce that StorPool will be another Platinum Sponsor for the OpenNebulaConf 2019 in Barcelona, Spain on October 21-22, 2019.

Be sure to join us, along with StorPool, in Barcelona for a great event. The “Early Bird” registration is open. Check out http://2019.opennebulaconf.com/ for more details. We look forward to seeing you in Barcelona!

If you would like to join StorPool as a sponsor, please check out the details.

About StorPool

StorPool is primary block-storage for building and managing public and private clouds. It is intelligent software that runs on standard hardware – servers, drives, networks – and turns them into a high-performance shared storage system. It aggregates the capacity and performance of all drives from many servers into a single block-level shared storage pool.

StorPool is a software-defined storage solution that provides extremely low latency, high performance, online scalability, and high availability. StorPool systems start from 1,000,000 IOPS and under 0.2 ms latency. It is a good alternative to traditional storage arrays, all-flash arrays, and other inferior storage software.

StorPool is deeply integrated with OpenNebula and is the leading storage solution for building exceptionally fast & reliable OpenNebula clouds.

Stop by our booth and learn more about StorPool and OpenNebula. Our Storage & Cloud Architects will be doing demos all day. If you want to book a personal demo, please do not hesitate to contact us at info@storpool.com.