11 Jul 2019
This talks about how data science and machine learning are changing the way we look at cricket, World Cup 2019, blah blah, the usual data science / ML marketing crap.
But did it not predict rain during the World Cup? :P
27 Jun 2019
I have recently got rid of all my major online profiles / presence, including Facebook, Quora, WhatsApp, Twitter, etc. I must say I have been more productive since then, not to mention that I have stopped being the end product of all these so-called technology corporations who are “making the world a better place”. It is quite shameful that, with zillions of things (still) left to solve, our industry chooses to use its best minds to predict whom to sell more /insert random stuff here/ to.
I am still on quite a few platforms including
- YouTube - intend to delete it soon, I don’t use it anymore
- Telegram - don’t think I will get off this any time soon
- LinkedIn - for professional connections, but it has turned into more of a recruiter spamware
So, still some rm -rf * left to do.
30 Mar 2017
If you are working with OpenStack and you do not want a full-fledged installation of OpenStack for testing, you can check out DevStack. It is a set of scripts that sets up a test OpenStack environment on a single node in a few minutes, driven by one simple script.
In my case, I used a 12 GB virtual machine with 6 CPU cores, a 200 GB disk and Ubuntu 16.04 as the Linux OS.
- The first step is to add a user with sudo privileges. I created a user called stack here, added it to the sudoers list and switched to that user's shell.
$ sudo useradd -s /bin/bash -d /opt/stack -m stack
$ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
$ sudo su - stack
- Next, clone DevStack from its git repository.
$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
- You now need to add configuration for the OpenStack admin password, database password, etc. In the root of the devstack directory, create a local.conf file and add the following.
[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
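Optionally, if your VM has more than one network interface, you can also pin the address DevStack should bind to in the same [[local|localrc]] section. This is only a sketch; the address below is a placeholder for your VM's actual IP.
HOST_IP=192.168.56.10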
- Now you are all set to start the installation. Just run the stack.sh script from the devstack directory.
$ ./stack.sh
You will see a bunch of packages downloading and getting installed. Wait for the process to exit successfully - and yes, you are done with your OpenStack test installation.
You can open the OpenStack Horizon Dashboard at http://devstack_box_url/.
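To sanity-check the installation from the command line, you can also source the credentials file that DevStack generates in the same directory and query the service catalog. A minimal sketch, assuming the admin / admin credentials configured in local.conf above:
$ source openrc admin admin
$ openstack service list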
08 Nov 2016
Very recently I have been working on a platform to orchestrate VM creation and lifecycle management across a cluster of nodes, with capacity, metadata, storage and network management. I was exploring things like KVM + QEMU and Xen when I read about libvirt and found that it suited my needs perfectly.
To start off with, KVM (Kernel-based Virtual Machine) is a virtualization infrastructure for the Linux kernel that turns it into a hypervisor, i.e. software which enables us to run virtual machines on top of a host machine. Xen is another such hypervisor.
libvirt, on the other hand, is an abstraction layer on top of the various hypervisor platforms, with a nice API for managing virtualization. We can choose any of the hypervisor backends that libvirt supports to create and manage virtual machines; the supported hypervisors include KVM/QEMU, Xen, LXC, OpenVZ, VirtualBox, VMware ESX, Hyper-V and bhyve, among others (source: Wikipedia).
libvirt has an API in C for development. It also has bindings in other languages like Python, Perl, OCaml, Ruby, Java, Go, PHP and C#.
Let’s start off by installing libvirt. Since the workstations I use run ArchLinux and OSX, I will stick to these two platforms only (installation instructions for other platforms are easily available).
On ArchLinux, you can install libvirt with a KVM backend by installing the following packages
sudo pacman -S libvirt qemu ebtables dnsmasq bridge-utils openbsd-netcat
KVM is the driver enabled by default.
On OSX, you can install libvirt via Homebrew by running
brew install libvirt
To ensure that the libvirt daemon is running, run the following command
sudo systemctl status libvirtd
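If the daemon is not running yet (Arch does not start services automatically after installation), you can start and enable it with systemd:
sudo systemctl start libvirtd
sudo systemctl enable libvirtd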
Once the libvirt daemon is running, you can use the command line client virsh, which comes included in the libvirt package, to connect to the daemon. Once inside the virsh client, you can run a bunch of commands to consume the API.
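For example, to drop into the interactive shell connected to the local QEMU/KVM hypervisor (the qemu:///system URI below assumes the default system-level daemon):
virsh -c qemu:///system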
- Getting the hypervisor hostname
virsh # hostname
playstation
- Getting the node information
virsh # nodeinfo
CPU model: x86_64
CPU(s): 4
CPU frequency: 1800 MHz
CPU socket(s): 1
Core(s) per socket: 2
Thread(s) per core: 2
NUMA cell(s): 1
Memory size: 8066144 KiB
- Getting version information
virsh # version
Compiled against library: libvirt 2.3.0
Using library: libvirt 2.3.0
Using API: QEMU 2.3.0
Running hypervisor: QEMU 2.7.0
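- One more example, listing all defined domains with list --all (on a fresh setup this list will be empty)
virsh # list --all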
You can use the help command to get a list of all the available commands.
In the next post in the libvirt series, I’ll start off with programmatically consuming the libvirt APIs.
05 Nov 2016
Over the past month, I have been exploring various PaaS providers in the market to study how they package and present their PaaS offerings to users, when someone at work pointed out Jujucharms by Canonical (the same company behind Ubuntu). Taking a first look at it, I immediately liked it, simply because of the clear and simple way in which they have presented application and service modelling.
Quoting from their website, “Juju is an application and service modelling tool that enables you to quickly model, configure, deploy and manage applications in the cloud with only a few commands. Use it to deploy hundreds of preconfigured services, OpenStack, or your own code to any public or private cloud.”
You can explore the modelling tool yourself here. As an example, I have a WordPress model set up with the WordPress nodes scaled to 3, one Apache node acting as a reverse proxy to the WordPress nodes, a MySQL master database connected to the WordPress nodes, and a slave MySQL instance.

The best thing about this is that when you deploy, Juju takes care of all the configuration required on the various nodes. (In the above example, the reverse proxy will be configured to point to the WordPress nodes, the slave database will be configured to replicate from the master database, etc.)
Once done with the modelling, you can deploy the models you generated for your applications and services to any of the public cloud platforms supported by Juju.
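If you prefer the CLI to the GUI, a simplified version of the same model can be built with a handful of juju commands. This is only a sketch of the canonical WordPress + MySQL example (the cloud name and charm names are assumptions, and the Apache reverse proxy and MySQL slave are left out for brevity):
$ juju bootstrap aws
$ juju deploy wordpress
$ juju deploy mysql
$ juju add-relation wordpress mysql
$ juju add-unit -n 2 wordpress
Juju then provisions the machines on the chosen cloud and wires up the relations, which is exactly the configuration work described above.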
