Some time ago, I wrote a blog post about Unikernels and was quite enthusiastic about the combination of OSv and XenServer. I had Cassandra running in an OSv container in a jiffy, and I think Unikernels on hypervisors are easier to manage and maintain than Docker images. If you want to know more about why I think they could be considered the successor of Docker, feel free to read my blog “Minimalist Cassandra VM using an OSv Unikernel“. In that post I wrote about my journey to get a unikernel Cassandra running, but I also tried to outline the pros and cons of Unikernels in general.
A few weeks after I wrote that blog, I received a comment from Tim (hello Tim 😉) who asked me what it would take to boot another application. Fortunately he put his question in the context of Jetty, which is an easy one: Jetty is actually one of the apps included in Cloudius’ app library and can be spun up as easily as Cassandra.
But I did take Tim’s question as a challenge. What if you wanted to run an application that is not in the library? Even worse, what if that application is not even a Java application, but a native Linux application? You have to know that Java is a first-class citizen on OSv, and Java applications are really easy to run on it. As I also wanted an MQTT broker for my home automation projects, I thought it would be a nice challenge to get Mosquitto running as a Unikernel, and eventually, of course, to get it running on my XenServer.
The first thing that needed to be done was to set up a Linux machine where I could spin up Unikernels and experiment with them. To get things up and running, I installed QEMU and VirtualBox on my Linux machine. Later I realized that I hardly used QEMU, so installing just VirtualBox would have sufficed.
Of course I also needed to be able to build Mosquitto, so I did the necessary preparations to get a build environment ready:
sudo apt-get update
sudo apt-get install build-essential python quilt devscripts python-setuptools python3
sudo apt-get install cmake libssl-dev uuid-dev xsltproc docbook-xsl libc-ares-dev daemon
If you want to run a native app on OSv, you can’t just plug the binaries into a VM image. OSv is not Linux: it is written from scratch, with some code taken from FreeBSD (mainly ZFS) and a POSIX API, but OSv is ABI-compatible with Linux. This means that it should be able to run executable code compiled for Linux, provided this code does not use one of the few features not supported by OSv, such as fork(). You can compile your existing Linux application with its normal build process and run the resulting Linux executable on OSv. There’s one snag, though: OSv cannot currently run “normal” (fixed-position) executables, and can only run a relocatable shared object (a file normally given a “.so” extension). It looks for the main() function in that shared object and runs it.
Converting a compilation process to produce a shared object instead of an executable is fairly trivial: all you need to do is add the -fPIC option to the compilation of each source file (to produce an object file with position-independent code) and add -shared to the linking stage (to produce a shared object instead of an executable). That’s it, and you can run the result on OSv.
So, to run Mosquitto on OSv, you need to get the source code and recompile it. I started by creating a fork of the Eclipse Mosquitto project from http://git.eclipse.org/c/mosquitto/org.eclipse.mosquitto.git.
The nice thing is that Mosquitto’s makefiles already take care of the -fPIC flag for parts of the application, as part of Mosquitto is actually compiled as a shared object. However, the broker itself isn’t compiled with that flag. To enforce it, I added -fPIC to the BROKER_CFLAGS on line 115 of the config.mk file.
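In make syntax, the change amounts to something like this (a sketch; the exact contents of BROKER_CFLAGS differ per Mosquitto version):

```make
# config.mk, around line 115: append -fPIC so the broker's
# object files are compiled as position-independent code
BROKER_CFLAGS:=${BROKER_CFLAGS} -fPIC
```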
The build process also needs to produce an .so version of the broker (which lives in the src directory). This is done by adding a few lines (14 and 15) to src/Makefile, which do basically the same as for the executable version, except that they include the -shared flag. To get this executed when “make all” runs, I also added mosquitto.so to the “all” target in lines 6 and 8. I could even have removed the creation of the regular Linux ELF binary, but I kept it in the Makefile, as I thought it might come in handy while debugging.
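The added target looks roughly like this (a sketch based on Mosquitto’s existing src/Makefile; variable names such as OBJS, BROKER_LDFLAGS and BROKER_LDADD are the ones that Makefile already uses, but they may differ between versions):

```make
# src/Makefile (sketch): also build a shared-object variant of the broker
all : mosquitto mosquitto.so

# identical to the link step of the regular broker, plus -shared
mosquitto.so : ${OBJS}
	${CROSS_COMPILE}${CC} ${BROKER_LDFLAGS} -shared $^ -o $@ ${BROKER_LDADD}
```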
Besides the Makefile, the configuration file also needs to be changed. Mosquitto usually tries to run as a non-root user, but as the concept of users doesn’t exist in a Unikernel (and in OSv), it needs to run as root. Hence the line “user root” in mosquitto.conf.
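For reference, a minimal mosquitto.conf along these lines is enough (a sketch; port 1883 is Mosquitto’s default and could be left out):

```
# OSv has no concept of users, so let the broker run as root
user root

# listen on the default MQTT port
port 1883
```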
Preparing OSv with Capstan
The easiest way to build an OSv image is to use Capstan. Capstan works quite similarly to Docker, except that it takes a Capstanfile instead of a Dockerfile with instructions for its build process. Capstan takes care of building the application (which is why you haven’t seen me run “make” in the previous section) and then turns it into an image.
The fact that Capstan builds your application as well is very convenient: with only one command to run to build both the application and the image, configuring CI tools such as Jenkins is a walk in the park.
To get Capstan running, you’ll have to download it first. Although Cloudius has prepared a bash script for that at https://raw.githubusercontent.com/cloudius-systems/capstan/master/scripts/download, I chose not to use it, as it installs Capstan in a bin directory under your home dir. Perhaps I’m a bit old-school, but I prefer to have it in /usr/bin. So I downloaded the binary and installed it myself:
chmod a+x capstan
sudo mv capstan /usr/bin
After installing Capstan, you need a Capstanfile to get it to build an image. So I have created the Capstanfile below:
base: cloudius/osv-base
cmdline: /mosquitto.so -c /etc/mosquitto.conf
build: make
files:
  /mosquitto.so: ./src/mosquitto.so
  /usr/lib/libmosquitto.so.1: ./lib/libmosquitto.so.1
  /etc/mosquitto.conf: ./mosquitto.conf
The first line indicates which base image should be used as the image’s foundation. Capstan will download this automatically from the Capstan repository.
The second line says that when the OSv image is launched, it should look for mosquitto.so. In that file, OSv will invoke the function main() and pass -c and /etc/mosquitto.conf as argv parameters.
The third line indicates how Capstan should build the application, which is by running make.
And the last set of lines indicates which files need to be copied into the OSv image. To get the Mosquitto broker to run, you need its shared-object library (libmosquitto.so), the broker itself (mosquitto.so) and a configuration file (mosquitto.conf). Note that the Capstanfile is formatted as YAML, so the two spaces of indentation in front of the file entries are significant.
When the Capstanfile is in place, you can run the build command by executing:
capstan build -v -p vbox
The -v parameter stands for “verbose” and makes the build process show which files it puts where, which gives a little bit of insight into the “magic”. The “-p vbox” parameter indicates that the build process should generate a VirtualBox image.
The build process will trigger the build (make all) of Mosquitto, download the base Capstan image and insert the result of the build into the OSv base image to complete the Mosquitto image.
Running the build command will result in:
154 B / 154 B [====================================================] 100.00 %
20.15 MB / 20.15 MB [==============================================] 100.00 %
Waiting for connection from host...
src/mosquitto.so --> /mosquitto.so
lib/libmosquitto.so.1 --> /usr/lib/libmosquitto.so.1
mosquitto.conf --> /etc/mosquitto.conf
Once the image has been built, you can run it using:
capstan run -v -p vbox
When you run it, you will see:
Created instance: osv-mosquitto
1440066677: mosquitto version 1.4.2 (build date 2015-08-20 12:06:01+0200) starting
1440066677: Config loaded from /etc/mosquitto.conf.
1440066677: Opening ipv4 listen socket on port 1883.
1440066677: Warning: Mosquitto should not be run as root/administrator.
With that, the Mosquitto broker has been launched and is listening on port 1883 for inbound connections from clients.
All of this went really quickly and seamlessly, but to show that it’s actually running in VirtualBox, you can launch the VirtualBox client and see something similar to this as a result:
Moving the Broker to XenServer
XenServer doesn’t eat the VirtualBox images produced by Capstan, so they need to be converted before XenServer can import them. To convert them, you can use the vboxmanage command:
vboxmanage clonehd ~/.capstan/repository/osv-mosquitto/osv-mosquitto.vbox ~/.capstan/repository/osv-mosquitto/osv-mosquitto.vhd --format VHD
This will result in a VHD file in the ~/.capstan/repository/osv-mosquitto/ directory, which can be imported by XenServer.
After importing the image file, the end result looks like this:
TL;DR – Don’t want to know, just give me the image
If you just want to get a Mosquitto unikernel image running and already have the build essentials and VirtualBox installed, just clone the Git repository into which I pushed the OSv Mosquitto version, then build and run it:
git clone https://github.com/jpenninkhof/osv-mosquitto.git
cd osv-mosquitto
capstan build -v -p vbox
capstan run -v -p vbox (optional)
And your image will be in ~/.capstan/repository/osv-mosquitto
Bonus: Node-RED Unikernel
I have also created the Capstan build files for a Node-RED Unikernel (based on Node.js). Please find the instructions to build the Node-RED OSv Unikernel below:
git clone https://github.com/jpenninkhof/osv-node-red.git
cd osv-node-red
capstan build -v -m 1024M
This should have built the Unikernel. To try it out using QEMU, run the command below:
capstan run -f 1880:80
With that, Node-RED is launched and you should see something similar to the lines below in your console:
Created instance: osv-node-red
Welcome to Node-RED
20 Aug 20:59:04 - [info] Node-RED version: v0.11.1-git
20 Aug 20:59:04 - [info] Node.js version: v0.10.33
20 Aug 20:59:04 - [info] Loading palette nodes
20 Aug 20:59:08 - [warn] ------------------------------------------
20 Aug 20:59:08 - [warn] Failed to register 1 node type
20 Aug 20:59:08 - [warn] Run with -v for details
20 Aug 20:59:08 - [warn] ------------------------------------------
20 Aug 20:59:08 - [info] Settings file : /node-red/settings.js
20 Aug 20:59:08 - [info] User directory : /node-red/.node-red
20 Aug 20:59:08 - [info] Flows file : /node-red/.node-red/flows_osv.local.json
20 Aug 20:59:08 - [info] Server now running at http://127.0.0.1:80/
20 Aug 20:59:08 - [info] Creating new flow file
20 Aug 20:59:08 - [info] Starting flows
20 Aug 20:59:08 - [info] Started flows
If you point your browser at http://localhost:1880, you should be able to see the Node-RED user interface. To turn the Node-RED image into a VMDK file for Xen or VMware, run the command below:
qemu-img convert -O vmdk ~/.capstan/repository/osv-node-red/osv-node-red.qemu ~/.capstan/repository/osv-node-red/osv-node-red.vmdk
After running this command, you can find your VMDK file in the ~/.capstan/repository/osv-node-red directory.
Update August 24, 2015: I received a tweet from Cloudius Systems that OSv isn’t based on FreeBSD, but is actually written from scratch. I have changed the text accordingly. Thanks for letting me know and for taking the effort to write in! 🙂 By the way, awesome job you’re doing. I’m moving more and more of my home apps to OSv Unikernels (my most recent victim was OpenHab, which will also be on GitHub soon), and they’re running far more reliably than they previously did under Docker. My wife and kids are also very happy that the light switches actually work. Thanks for that. I hope I will get the opportunity to show my colleagues and customers the benefits of Unikernels one day as well.