You might have heard about Ansible recently, but if you haven’t, we recommend that you have a look at the Ansible official website. To summarize using Ansible’s own description:
“Ansible is a radically simple IT orchestration engine that automates configuration management, application deployment, and many other IT needs”
Abiquo has a native Chef integration (Chef is yet another configuration management tool). However, we want to enable our users to work with whichever technology they feel comfortable with, so we’ve contributed the Abiquo dynamic inventory plugin to Ansible, which makes Abiquo integration with Ansible a piece of cake.
Without going into too much detail, an Ansible inventory is an INI-format text file that catalogs the hosts on which playbooks (Ansible’s configuration templates) are executed.
Now, with the Abiquo dynamic inventory plugin for Ansible, a user with access to Abiquo’s API can generate an inventory file for Ansible that lists all of the user’s resources, grouped into useful categories.
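To give a feel for the format, here is a small static inventory in that INI style (the group and host names below are made up for illustration; the dynamic inventory plugin generates equivalent groupings, such as per virtual datacenter or per template, from the Abiquo API):

```ini
; Hypothetical inventory, grouped the way a dynamic inventory might group it
[my-virtual-datacenter]
web-01 ansible_ssh_host=10.60.1.10
web-02 ansible_ssh_host=10.60.1.11

[ubuntu-template]
web-01
db-01 ansible_ssh_host=10.60.1.20
```

Ansible would then target any of these groups, e.g. `ansible-playbook -i inventory site.yml -l my-virtual-datacenter`.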
Some time ago, we had a couple of inquiries about support for running KVM hypervisors with Open vSwitch (OVS). Older versions of OVS included the brcompat module, which made OVS work with regular Linux bridges instead of its own virtual switches. This meant Abiquo would behave as if it were using regular bridges. However, in recent versions of OVS the brcompat module has been deprecated and no longer works as well as it should. Support for OVS is on our development roadmap, but in the meantime, we will explain how to “hack” an unsupported version of this feature using libvirt “domain events”.
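To sketch the idea (this is purely illustrative, not Abiquo code): a callback registered for libvirt lifecycle events could react to a VM starting or stopping by attaching or detaching its tap interface on an OVS bridge with ovs-vsctl. The helper below, with hypothetical bridge and interface names, only builds the command line, which is the part that is easy to show in isolation:

```python
# Sketch: translate a libvirt lifecycle event into the ovs-vsctl command
# we would run. Bridge, interface and VLAN names are hypothetical examples.

STARTED, STOPPED = 2, 5  # libvirt VIR_DOMAIN_EVENT_STARTED / _STOPPED values

def ovs_command_for_event(event, iface, bridge, vlan_tag=None):
    """Return the ovs-vsctl argv for a VM lifecycle event, or None."""
    if event == STARTED:
        cmd = ["ovs-vsctl", "add-port", bridge, iface]
        if vlan_tag is not None:
            cmd.append("tag=%d" % vlan_tag)
        return cmd
    if event == STOPPED:
        return ["ovs-vsctl", "--if-exists", "del-port", bridge, iface]
    return None

# In the real hack this function would be called from a callback registered
# with conn.domainEventRegisterAny(..., libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
# ...) and the command executed with subprocess.check_call(cmd).
```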
Most of the examples we have provided of how to interact with the Abiquo API are written in Python, so now it’s time to put some rubies in our blog. We will show you how to use Ruby to interact with the Abiquo APIs. First we will consume events by connecting to the Outbound API, but we will not process the event data; we will just use it as a trigger. When we detect a deploy/undeploy action, we will retrieve more information about the deployed VMs by calling the REST API.
When developing Abiquo v2.6, simplicity and functionality were the focus, and the workflow acceptance tool has achieved just that.
The new workflow feature will allow you to create your own tools to interact with Abiquo processes at the very same moment they are being executed. In short, during certain Abiquo tasks, Abiquo will send an HTTP request with the task details to a configured endpoint. The task details will contain the links to accept or decline the task. It’s as simple as that.
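As a rough sketch of what such an endpoint might do (the payload shape and link names below are illustrative assumptions, not the exact Abiquo schema): parse the task details, pick the accept or decline link according to some business rule, and POST back to the chosen link.

```python
import json

# Hypothetical reviewer logic for a workflow endpoint. The JSON layout
# ("links" with "accept"/"decline" rels) is an assumption for illustration.

def choose_action(task_json, max_vms=10):
    """Return the URL to POST to: accept small deploys, decline the rest."""
    task = json.loads(task_json)
    links = {link["rel"]: link["href"] for link in task["links"]}
    rel = "accept" if task.get("vmCount", 0) <= max_vms else "decline"
    return links[rel]

# A real endpoint would receive the payload over HTTP and then POST to the
# returned URL (e.g. with urllib.request) to resolve the task.
```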
To give you some background: the 2.4 release really focused on ensuring that we dealt with any scalability barriers detected in our stress tests. With v2.6 the focus has been on including even more extensive integration capabilities, to enable our customers to build innovative services and solutions.
We know that the “one size fits all” approach doesn’t work in the cloud marketplace, so an important goal for Abiquo is to enable our customers to offer a unique combination of services and solutions. With v2.6 we have enabled new integration capabilities, exciting enhancements to our outbound API, further backup integration, workflow improvements and the opportunity to manage your public cloud through Amazon EC2.
Below is a review of these capabilities:
We recently had the pleasure of hosting a visit from Abiquo; pleasurable because it quickly morphed into a hackathon with Xavier Fernandez which yielded a nicely working integration. A long flight gave me the chance to clean it up and make a Brooklyn pull request (#813) with the net result that Abiquo is now (for all you TL;DR vics) a first-class target for the Brooklyn Catalog.
The session started with the usual describe-and-demo. However, it is always the surprises that are most interesting, so here’s what surprised me:
- Abiquo has its own API. Is this madness given the momentum of the *-stack bandwagons? I don’t want to have to work with yet another API. But it turns out I don’t have to: they’ve got bindings for jclouds and other client libraries, which meant, as we’ll discuss below, that it worked with a lot less fuss than I’m used to.
- It’s a very nice GUI. Pretty is always nice, but more importantly it had clearly been through a few rounds of real usability feedback. The little things that usually irk me had all been solved, like searching for resources … acting on several VMs at the same time … and hanging on a slow connection. (The downside is that it’s Flex, not HTML5/JS/REST, but Abiquo are on the case, so look out for a new version when 3.0 is released later this year.)
- It’s multi-cloud. Multi-hypervisor is old news, but there are still not many cloud platforms that can front many locations. Add in wide-area support, efficient use of a datastore (Redis, in Abiquo’s case), and locations that are someone else’s cloud with a different API, and the list is very short. Location is a first-class concept, and the current snapshot version supports AWS locations. OpenStack and CloudStack targets are hopefully around the corner.
One of the key aspects of every software product is the quality of the product itself. This is often measured by counting bugs, gathering performance metrics and tracking the memory needed to run the product, and is shown to the world in the form of attractive reports and graphics. However, there is one aspect that is very difficult to measure but extremely important: code quality. How can this be measured? Measuring code quality can be a very subjective process, so instead of relying on reports and tools that generate cool graphics, the smart approach is to integrate code quality assurance into the development cycle. Today I’ll explain how adding a code review process to our development cycle has helped us deliver better software.
Now that you can download Abiquo and get a 30-day trial license, you may like to automate the deployment of a new datacenter in your Abiquo environment. I’ll explain here how to automatically deploy Abiquo using Cobbler.
First of all, you need to get Cobbler up and running. I’m not going to explain here how to install it, but you can follow the official guide, which is very detailed, or these instructions for installing on CentOS 6. However, I will give you some advice about Cobbler installation. Review the output of the ‘cobbler check’ command to double-check that everything is working fine. You will need to have at least DHCP and TFTP managed by Cobbler.
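For reference, those two options live in Cobbler’s settings file (on CentOS 6 that is /etc/cobbler/settings, a YAML file); the minimal fragment below enables DHCP and TFTP management. The DHCP subnet itself is configured separately in Cobbler’s DHCP template:

```yaml
# /etc/cobbler/settings — let Cobbler manage DHCP and TFTP, as advised above.
# Restart cobblerd and run 'cobbler sync' after changing these values.
manage_dhcp: 1
manage_tftpd: 1
```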
When you are managing a huge number of virtual machines with a shared datastore, it can be very dangerous for different virtual machines to access the same disk file or volume. This can lead to data consistency problems and, in a worst-case scenario, the loss of all information on the disk.
We all know that things that should never happen sometimes do. That’s why it’s good practice to add an extra security layer, to ensure that disk access will always be controlled.
A tool for this purpose is listed on the libvirt webpage:
With libvirt-lock-sanlock, we create a connection between libvirt and sanlock. Whenever libvirt is using a disk or volume, sanlock will create a lock, so other virtual machines that try to use the same disk won’t be able to start up.
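On a KVM host this wiring comes down to a couple of configuration lines (the paths and values shown are the usual defaults; check your distribution’s packaging):

```
# /etc/libvirt/qemu.conf — tell libvirt's QEMU driver to use the sanlock plugin
lock_manager = "sanlock"

# /etc/libvirt/qemu-sanlock.conf — automatic leases on a shared lease directory
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id = 1   # must be unique on every host sharing the datastore
```

With auto_disk_leases enabled, libvirt acquires a sanlock lease for every disk a domain uses, so a second host trying to start a VM on the same disk is refused.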
It is a pleasure to announce on our technical blog the release of Abiquo 2.4 (internally called King Piccolo). I don't think it's necessary to go over all the features included in this release, because you can read about them on our public site, and I'm sure that Abiquo's business associates will announce them on their websites.
However, I do want to highlight some exciting technical solutions included in this release that will move the platform to a new stage of scalability, reliability and robustness (even more so than previous versions).
I think the major change in this release, ahead of other internal improvements, is the full Load Balancing capability of the Abiquo Management Server, enabling you to put Abiquo behind any load balancer and horizontally scale based on the needs of your business.
Implementing load balancing may seem easy, but it isn't when you have a distributed environment orchestrated by queues that sometimes need a coherent job order, or when your scheduler engine must maintain the integrity of resource data, so you must ensure only one allocation happens at a time.
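To illustrate that last point with a generic sketch (not Abiquo's implementation): when several workers can schedule at once, the check-then-allocate step must be serialized so each decision sees the resource state left by the previous one, or the pool gets oversubscribed.

```python
import threading

# Toy scheduler: many workers allocate from a shared capacity pool.
# The lock enforces "one allocation at a time", so we never oversubscribe.

class Scheduler:
    def __init__(self, capacity):
        self.capacity = capacity
        self.allocations = []
        self._lock = threading.Lock()  # stand-in for a distributed lock

    def allocate(self, vm, needed):
        with self._lock:               # serialize the check-then-act step
            if needed <= self.capacity:
                self.capacity -= needed
                self.allocations.append(vm)
                return True
            return False

sched = Scheduler(capacity=4)
threads = [threading.Thread(target=sched.allocate, args=("vm%d" % i, 1))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly 4 of the 8 requests succeed; capacity never goes negative.
```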