If you believe in the containerised concept and that containers should be immutable, certain parts of the configuration should be fixed as well. In the context of OpenHab, this means that you can’t add switches, sensors and actuators to a running instance of OpenHab once you have deployed your container or unikernel. When the configuration changes, a new container should be built and deployed.
Applying immutable containers means that you’re deploying new images a bit more often than you would when using mutable containers. To make sure that this isn’t too painful, I have put Jenkins to the task. Whenever I change the OpenHab configuration, I push it to Bitbucket. And as soon as the changes have been stored in Git, I want Jenkins to build and deploy a new OSv unikernel for me.
Fortunately, Bitbucket has excellent ways to connect to a build service, either through hooks or services. Setting it up is really a breeze. More info on how to configure this can be found in the Atlassian documentation.
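For illustration, assuming a Jenkins with the Git plugin installed (that plugin is my assumption here; the Atlassian docs cover other options too), a Bitbucket POST hook can simply point at the Git plugin’s notifyCommit endpoint, which triggers polling for every job that watches the repository:

```shell
# hypothetical host and repository names; the notifyCommit endpoint
# makes Jenkins poll all jobs configured with this repository URL
curl "http://jenkins.example.local:8080/git/notifyCommit?url=git@bitbucket.org:me/openhab-config.git"
```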
My Jenkins now runs a very nice script that takes the latest OpenHab release, adds the add-ons I need for my home rig, and merges it with my own configuration. After that, it packages this custom OpenHab installation into an OSv unikernel using the osv-openhab GitHub project I made earlier, and deploys it to my XenServer.
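In rough strokes, the script looks like the sketch below. The directory names and the simulated release tree are assumptions for illustration; the actual download, Capstan build and deployment steps are specific to my setup, so they are only hinted at in comments:

```shell
#!/bin/sh
# Sketch of the build steps (paths and file names are assumptions):
#  1. fetch and unpack the stock OpenHab release   (wget/unzip, not run here)
#  2. overlay the add-ons and my own configuration (plain cp)
#  3. package with Capstan and deploy              (capstan build / xe, host-specific)
set -e

DIST=openhab-dist           # unpacked stock release (assumed layout)
REPO=openhab-config         # git checkout with my configuration

# simulate an unpacked release and a config checkout for this example
mkdir -p "$DIST/addons" "$DIST/configurations" "$REPO/configurations"
echo "Switch Light_Kitchen" > "$REPO/configurations/home.items"

# overlay the configuration onto the stock tree
cp -r "$REPO/configurations/." "$DIST/configurations/"

# the merged tree is now the input for the unikernel image
ls "$DIST/configurations"
```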
But Capstan uses KVM
What I discovered during this process is that the Capstan tool actually spins up KVM to prepare the ZFS disk image for the virtual machine (more info here). Since Jenkins is a virtual machine in my home lab, it wouldn’t be possible to run KVM from within this virtual machine, as a hypervisor (KVM) within another hypervisor (Xen) is simply not possible.
Or is it? There have been quite a few developments in this area, and many of the hypervisor projects seem to have taken up this challenge. While they compete on how many other hypervisors they can run within theirs, this might come in handy for running Capstan from within a VM as well. If XenServer were able to run KVM, Capstan would be operational, and I would be able to build OSv unikernels from within a Jenkins virtual machine.
XenServer, as its name implies, is based on the Xen project, and Xen has been quite on the competitive edge when it comes to supporting nested hypervisors. In fact, they have created a Wiki page on this topic that explains how you can get such a nested hypervisor setup to run.
However, Xen is not the same as XenServer. XenServer is a product that a large corporation actually takes responsibility for and even sells commercial support for. That means they usually can’t take the bleeding edge from the open source repositories and need to stick to what is stable enough to keep their business viable.
The nice thing is that the commercial edition of Xen, XenServer, is also released as open source. This means that its code can be inspected, and with that it is actually possible to see which parts of Xen have been integrated into XenServer. The good news is that XenServer 6.5 SP1 has these nested VM features built in as well. Justus Beyer even found out that this feature can be switched on quite easily through the XenServer registry. In his blog “Enable experimental nested virtualization in Citrix XenServer 6.5 (SP1)“, he describes how this feature can be switched on for a VM with just one statement:
xe vm-param-set uuid=<UUID of VM> platform:exp-nested-hvm=true
It is quite interesting that you won’t find anything about this feature in Citrix’ documentation, site or forums.
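To check that the flag actually stuck, you can read the platform map back with the matching vm-param-get call:

```shell
# should list exp-nested-hvm: true among the platform keys
xe vm-param-get uuid=<UUID of VM> param-name=platform
```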
It needs to be an HVM machine!
When I read Justus’ blog, I couldn’t wait to try this out and immediately added that parameter to my Jenkins virtual machine. Full of anticipation, I then quickly started it, checked /proc/cpuinfo and… Nothing. No vmx flag.
So what did that mean? Didn’t I have the right version of XenServer, the right type of virtual machine, or perhaps the wrong operating system?
I just thought I’d give a couple of things a try. The easiest was of course to upgrade my XenServer to the latest patch level, so I installed all the patches I could find. Unfortunately… Nothing.
Looking into the operating system itself didn’t make much sense to me, as the OS doesn’t decide which CPU flags are available; it simply enumerates the flags passed to it by the CPU. So I didn’t start looking at different operating systems to find out whether that made a difference.
Then I remembered that the parameter that needed to be switched to true was called “platform:exp-nested-hvm“. It says HVM, not PV, and PV is the default that is applied to Ubuntu machines when you create them on XenServer. So perhaps I needed to change my machine into an HVM machine?
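A quick way to see which mode a VM currently uses is to ask XenServer for its boot policy; for a PV guest this parameter is empty, while an HVM guest reports “BIOS order”:

```shell
# empty output = PV guest; "BIOS order" = HVM guest
xe vm-param-get uuid=<UUID of VM> param-name=HVM-boot-policy
```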
Fortunately, Randy Mcanally has built a terrific script that can do exactly this in a jiffy. The script was built for a much earlier version of Xen, but it still works perfectly on XenServer 6.5.
So I downloaded the vmtool file into the /tmp directory of my XenServer and converted my Jenkins VM to an HVM with
./vmtool.pl --cmd=pvtohvm --uuid=<UUID of VM>
and rebooted, and… Success!!!
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz
stepping : 9
microcode : 0x17
cpu MHz : 3300.088
cache size : 6144 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl
pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
aes rdrand hypervisor lahf_lm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep
bogomips : 6600.17
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
So apparently the key to success is that the machine is HVM-based and not PV-based.
As a result of the availability of vmx within the virtual machine, I was now able to start KVM and Capstan from within the Jenkins machine without getting nasty messages anymore.
Allow Jenkins to launch KVM
When I triggered the build script from Jenkins, I suddenly didn’t have so much success anymore, and Capstan reported: “Failed to intialize kvm hypervisor: permission denied”.
Eventually it turned out that the Jenkins user was not entitled to use the /dev/kvm device, as you need to be a member of the kvm group or be root. The first option seemed nicest, so I added the jenkins user to the kvm group with:
sudo usermod -a -G kvm jenkins
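Note that a running Jenkins process only picks up the new group membership after a restart. To verify the change took effect (the service name depends on your distribution):

```shell
# jenkins should now list kvm among its groups,
# and /dev/kvm should be group-owned by kvm
id -nG jenkins
ls -l /dev/kvm
sudo service jenkins restart
```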
With that, the build process now runs perfectly. When I check new configuration files into the Git repository, it only takes a few minutes before a new unikernel that incorporates the changes is running on my XenServer machine.
Have a look at what this looks like in the video below: