Gateway for exestack?

One of the limitations of LambdaCloud was that it only reacted to events, and that execution was asynchronous. One of the most common use cases when building applications is serving REST APIs over HTTP, where you expect a response to the request on the same connection.

I would like exestack to support this particular use case, so I am thinking of writing a fourth module called Gateway that sits just in front of Dockyard (the container runtime platform). This would mean that code in Dockyard can be executed via two channels: Eventengine and Gateway.

Some features, off the top of my head, that could be added to Gateway:

  • authentication
  • rate limiting
  • documentation generation
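To make the synchronous request/response flow concrete, here is a minimal sketch of what Gateway could do: accept an HTTP request, forward it to the container behind Dockyard, and return the container's response on the same connection. This is purely illustrative; the Dockyard endpoint, the routing scheme and all names below are assumptions, not actual exestack code.

# Illustrative Gateway sketch (not actual exestack code).
# Assumption: Dockyard exposes each project's container at a hypothetical
# internal HTTP endpoint like http://dockyard.internal/<project>/...
from flask import Flask, Response, request
import requests

app = Flask(__name__)
DOCKYARD_BASE = "http://dockyard.internal"  # hypothetical internal endpoint

@app.route("/api/<project>/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def forward(project, path):
    # Authentication and rate limiting would be enforced here.
    resp = requests.request(
        method=request.method,
        url=f"{DOCKYARD_BASE}/{project}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        timeout=30,
    )
    # Hand the container's response back to the caller on the same connection.
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(port=8080)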

Initial ideas for exestack

So I have just kicked off writing an open source, event-driven code executor platform called exestack on GitHub. The idea behind the platform is to execute a piece of code in the cloud, triggered by an event happening anywhere in the physical or the virtual world. The idea is very similar to my previous implementation of LambdaCloud (which is closed source), which executes functions defined in Python, Ruby or PHP in response to events like database or file system writes.

So you can say this is something of a rewrite of the LambdaCloud platform, but with the following differences:

  • I want to have a generic, webhooks-like core platform which can collect all the events and probably queue them. The event handler would then make some form of RPC or REST API call.

  • I do not want to restrict code to be defined as functions in a few available runtimes. Rather, I want to standardize the runtime - so the best option is to abstract it out using containers (Docker comes to mind here). So I’ll provide a platform where the user can push code in the form of Docker images, and the platform should be able to spin up containers from these images and run them.

  • I want to provide some sort of bridge layer where the actual RPC or REST call happens from the webhooks platform to the container.

Basically with these concepts in mind, I have come up with the following modules:

  • controller - To manage all the meta-data like user and project information, event wiring, etc.

  • eventengine - To accept all the events from the physical and the virtual world and react to them according to the event wiring defined in the controller. (I have still not figured out what this wiring should look like.)

  • dockyard - Runtime to run the user-defined code. This will basically use some form of Docker / Kubernetes / Mesos - I am yet to settle on the best solution.
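To make the event flow concrete, here is a rough sketch of how eventengine could behave: a webhook endpoint collects events, queues them, and a worker makes a REST call to whatever container the wiring maps the event to. The wiring format, URLs and names below are assumptions for illustration only, not actual exestack code.

# Illustrative eventengine sketch (not actual exestack code).
# Assumptions: events arrive as JSON webhooks, the wiring is a simple
# mapping of event type to a container URL, and containers expose an
# HTTP endpoint. All names and URLs here are hypothetical.
import json
import queue
import threading
import requests
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical event wiring; in exestack this would come from the controller.
WIRING = {"db.write": "http://dockyard.internal/projects/demo/handler"}

events = queue.Queue()

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        events.put(json.loads(body))   # collect and queue the event
        self.send_response(202)        # accepted; handled asynchronously
        self.end_headers()

def dispatch():
    # Worker: pop events off the queue and call the wired container.
    while True:
        event = events.get()
        target = WIRING.get(event.get("type"))
        if target:
            requests.post(target, json=event, timeout=30)

threading.Thread(target=dispatch, daemon=True).start()
HTTPServer(("", 9000), WebhookHandler).serve_forever()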

DevStack - OpenStack test environment for the impatient

If you are working with OpenStack and you do not want a full-fledged installation of OpenStack for testing, you can check out DevStack. It is a set of scripts which will set up a test OpenStack environment in a few minutes, with one simple script, on a single node.

In my case, I used a virtual machine with 12 GB of RAM, 6 CPU cores, a 200 GB disk and Ubuntu 16.04 as the Linux OS.

  • The first step is to add a user with sudo privileges. I created a user called stack here, added it to the sudoers list and switched to that user's shell.
$ sudo useradd -s /bin/bash -d /opt/stack -m stack
$ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
$ sudo su - stack
  • Next, download devstack from the git repository.
$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
  • You now need to add configuration for the OpenStack admin password, database password, etc. In the root of the devstack directory, create a local.conf file and add the following:
[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
  • Now you are all set to start the installation. Just run
$ ./stack.sh

You will see a bunch of packages downloading and getting installed. Wait for the process to exit successfully - and yes, you are done with your OpenStack test installation.

You can open the OpenStack Horizon Dashboard at http://devstack_box_url/ (replace devstack_box_url with the IP or hostname of your DevStack machine).
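If you would rather verify the installation from code than from the dashboard, here is a minimal sketch using the openstacksdk Python library. The auth URL and domain names are assumptions based on a default DevStack setup (they may differ on yours), and the password is the ADMIN_PASSWORD set in local.conf above.

# Minimal sketch: connect to the fresh DevStack cloud and list images.
# Assumption: Keystone is reachable at http://devstack_box_url/identity/v3
# (on some setups it is exposed on port 5000 instead).
import openstack

conn = openstack.connect(
    auth_url="http://devstack_box_url/identity/v3",  # adjust to your box
    project_name="admin",
    username="admin",
    password="admin",            # ADMIN_PASSWORD from local.conf
    user_domain_name="Default",
    project_domain_name="Default",
)

for image in conn.image.images():
    print(image.name)            # DevStack typically uploads a CirrOS test image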

Getting started with libvirt

Very recently I have been working on a platform to orchestrate VM creation and lifecycle management across a cluster of nodes, with capacity, metadata, storage and network management. I was exploring things like KVM + QEMU and Xen when I read about libvirt and found that it suits my needs perfectly.

To start off with, KVM (Kernel-based Virtual Machine) is a virtualization infrastructure for the Linux kernel that turns it into a hypervisor - software which enables us to run virtual machines on top of a host machine. Xen is another such hypervisor.

libvirt, on the other hand, is an abstraction layer on top of the various hypervisor platforms, with a nice API available to manage virtualization. We can choose any of the hypervisor backends that libvirt supports to create and manage virtual machines. The hypervisors supported by libvirt are listed below:

[List of hypervisors supported by libvirt - Source: Wikipedia]

libvirt has an API in C for development. It also has bindings in other languages like Python, Perl, OCaml, Ruby, Java, Go, PHP and C#.

Let’s start off by installing libvirt. Since the workstations I use are ArchLinux and OSX, I will stick to these two platforms only (the installation instructions for other platforms are easily available).

On ArchLinux, you can install libvirt with a KVM backend by installing the following packages:

sudo pacman -S libvirt qemu ebtables dnsmasq bridge-utils openbsd-netcat

With these packages installed, KVM is enabled as the default driver.

On OSX, you can install libvirt by running

brew install libvirt

On ArchLinux (or any other Linux distribution using systemd), ensure that the libvirt daemon is running with the following command

sudo systemctl status libvirtd

Once the libvirt daemon is running, you can use the command line client virsh, which is included in the libvirt package, to connect to the daemon.

Once inside the virsh client, you can run a bunch of commands to consume the API.

  • Getting the hypervisor hostname
virsh # hostname
playstation
  • Getting the node information
virsh # nodeinfo 
CPU model:           x86_64
CPU(s):              4
CPU frequency:       1800 MHz
CPU socket(s):       1
Core(s) per socket:  2
Thread(s) per core:  2
NUMA cell(s):        1
Memory size:         8066144 KiB
  • Getting the version
virsh # version
Compiled against library: libvirt 2.3.0
Using library: libvirt 2.3.0
Using API: QEMU 2.3.0
Running hypervisor: QEMU 2.7.0

You can use the help command to get a list of all available commands.

In the next post in the libvirt series, I’ll start off with programmatically consuming the libvirt APIs.
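As a small preview of that, here is a minimal sketch using the Python binding (the libvirt-python package), assuming a local QEMU/KVM hypervisor reachable at qemu:///system:

# Minimal sketch: connect to the local hypervisor and read node details,
# roughly mirroring the virsh commands above. Assumes libvirt-python is
# installed and a QEMU/KVM hypervisor is available at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")

print(conn.getHostname())    # equivalent of `virsh hostname`
print(conn.getInfo())        # node details, like `virsh nodeinfo`
print(libvirt.getVersion())  # libvirt library version

conn.close()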

Have you checked out Jujucharms?

Over the past month, I have been exploring various PaaS providers in the market to study how differently they package and present their PaaS offerings to users, when someone at work pointed me to Jujucharms by Canonical (the same company behind Ubuntu). Taking a first look at it, I immediately liked it, simply because of the clear and simple way in which they have presented application and service modelling.

Quoting from their website, “Juju is an application and service modelling tool that enables you to quickly model, configure, deploy and manage applications in the cloud with only a few commands. Use it to deploy hundreds of preconfigured services, OpenStack, or your own code to any public or private cloud.”

You can explore the modelling tool yourself here. As an example, I have set up a WordPress model with the WordPress nodes scaled to 3, one Apache node acting as a reverse proxy to the WordPress nodes, a MySQL master database connected to the WordPress nodes and a slave MySQL instance.

WordPress model

The best thing about this is that when you deploy, Juju takes care of all the configuration required on the various nodes. (In the above example, the reverse proxy will be configured to point to the WordPress nodes, the slave database will be configured to replicate from the master database, etc.)

Once done with the modelling, you can deploy the models you generated for your applications and services to any of the public cloud platforms supported by Juju.

Public clouds supported by Juju