Posts for: #Tech

Installing nova-docker with devstack

This is a long-form response to this question, and describes how to get the nova-docker driver up and running with devstack under Ubuntu 14.04 (Trusty). I wrote a similar post for Fedora 21, although that one used the RDO Juno packages, while this one uses devstack and the upstream sources.

Getting started

We’ll be using the Ubuntu 14.04 cloud image (because my test environment runs on OpenStack).

First, let’s install a few prerequisites:

$ sudo apt-get update
$ sudo apt-get -y install git git-review python-pip python-dev

And generally make sure things are up-to-date:
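
The post's exact commands are behind the cut; on Trusty this is presumably the standard upgrade:

$ sudo apt-get -y upgrade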

[read more]

External networking for Kubernetes services

I have recently started running some “real” services (that is, “services being consumed by someone other than myself”) on top of Kubernetes (running on bare metal), which means I suddenly had to confront the question of how to provide external access to Kubernetes-hosted services. Kubernetes provides two solutions to this problem, neither of which is particularly attractive out of the box:

  1. There is a field createExternalLoadBalancer that can be set in a service description. This is meant to integrate with load balancers provided by your local cloud environment, but at the moment there is only support for this when running under GCE.
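
     In the API of the era this was a simple boolean on the service itself; a minimal sketch (assuming the v1beta1 schema; the id, port, and selector here are invented):

         {
             "id": "my-service",
             "kind": "Service",
             "apiVersion": "v1beta1",
             "port": 80,
             "selector": { "name": "my-app" },
             "createExternalLoadBalancer": true
         }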

[read more]

Installing nova-docker on Fedora 21/RDO Juno

This post was prompted by a request on IRC in #rdo for help getting nova-docker installed on Fedora 21. I ran through the process from start to finish and decided to write everything down for posterity.

Getting started

I started with the Fedora 21 Cloud Image, because I’m installing onto OpenStack and the cloud images include some features that are useful in this environment.

We’ll be using OpenStack packages from the RDO Juno repository. Because there is often some skew between the RDO packages and the current Fedora SELinux policy, we’re going to start by putting SELinux into permissive mode (sorry, Dan):
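
That is, roughly (these are the conventional commands; the post's exact steps are behind the cut):

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config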

[read more]

Creating minimal Docker images from dynamically linked ELF binaries

In this post, we’ll look at a method for building minimal Docker images for dynamically linked ELF binaries, and then at a tool for automating this process.

It is tempting, when creating a simple Docker image, to start with one of the images provided by the major distributions. For example, if you need an image that provides tcpdump for use on your Atomic host, you might do something like:

FROM fedora
RUN yum -y install tcpdump

And while this will work, you end up consuming 250MB for tcpdump. In theory, the layering mechanism that Docker uses to build images will reduce the practical impact of this (because other images based on the fedora image will share the common layers), but in practice the size is noticeable, especially if you often find yourself pulling this image into a fresh environment with no established cache.
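
The alternative the post explores is to copy in only what the binary actually needs. In outline (this sketch is mine; the post goes on to automate the process):

$ mkdir minimal
$ cp --parents /usr/sbin/tcpdump minimal/
$ for lib in $(ldd /usr/sbin/tcpdump | grep -o '/[^ ]*'); do
>     cp --parents "$lib" minimal/
> done

Then build from a FROM scratch Dockerfile in that directory (the Dockerfile itself ends up in the image, which is fine for a sketch):

FROM scratch
COPY . /
ENTRYPOINT ["/usr/sbin/tcpdump"]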

[read more]

Filtering libvirt XML in Nova

I saw a request from a customer float by the other day regarding the ability to filter the XML used to create Nova instances in libvirt. The customer effectively wanted to blacklist a variety of devices (and device types). The consensus seems to be “you can’t do this right now and upstream is unlikely to accept patches that implement this behavior”, but it sounded like an interesting problem, so…

The result is a fork of Nova (Juno) that includes support for an extensible filtering mechanism applied to the generated XML before it is passed to libvirt.
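
As a standalone illustration of the sort of transformation such a filter performs (this xmlstarlet one-liner is mine, not the fork's mechanism), dropping all <sound> devices from a domain definition looks like:

$ xmlstarlet ed -d '/domain/devices/sound' domain.xml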

[read more]

Docker vs. PrivateTmp

While working with Docker the other day, I ran into an undesirable interaction between Docker and systemd services that utilize the PrivateTmp directive.

The PrivateTmp directive, if true, “sets up a new file system namespace for the executed processes and mounts private /tmp and /var/tmp directories inside it that is not shared by processes outside of the namespace”. This is a great idea from a security perspective, but can cause some unanticipated consequences.
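
In unit-file terms it is a single line; a minimal fragment of an (invented) service unit for illustration:

[Service]
ExecStart=/usr/sbin/myservice
PrivateTmp=true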

The problem in a nutshell

  1. Start a Docker container:

[read more]

Running nova-libvirt and nova-docker on the same host

I regularly use OpenStack on my laptop with libvirt as my hypervisor. I was interested in experimenting with recent versions of the nova-docker driver, but I didn’t have a spare system available on which to run the driver, and I use my regular nova-compute service often enough that I didn’t want to simply disable it temporarily in favor of nova-docker.


NB As pointed out by gustavo in the comments, running two neutron-openvswitch-agents on the same host – as suggested in this article – is going to lead to nothing but sadness and doom. So kids, don’t try this at home. I’m leaving the article here because I think it still has some interesting bits.
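
For the curious, the gist of the approach was a second nova-compute process with its own config file and a distinct host name (the names and path below are invented):

# /etc/nova/nova-docker.conf
[DEFAULT]
host=mylaptop-docker
compute_driver=novadocker.virt.docker.DockerDriver

$ nova-compute --config-file /etc/nova/nova.conf \
>     --config-file /etc/nova/nova-docker.conf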

[read more]

Building a minimal web server for testing Kubernetes

I have recently been doing some work with Kubernetes, and wanted to put together a minimal image with which I could test service and pod deployment. Size in this case was critical: I wanted something that would download quickly when initially deployed, because I am often setting up and tearing down Kubernetes as part of my testing (and some of my test environments have poor external bandwidth).

Building thttpd

My go-to minimal webserver is thttpd. For the normal case, building the software is a simple matter of ./configure followed by make. This gets you a dynamically linked binary; using ldd to enumerate its dependencies, you could build a Docker image containing only the binary and the necessary shared libraries:
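
For the dynamically linked case, that is simply:

$ ./configure
$ make
$ ldd ./thttpd    # lists the shared libraries to copy into the image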

[read more]

Accessing the serial console of your Nova servers

One of the new features available in the Juno release of OpenStack is support for serial console access to your Nova servers. This post looks into how to configure the serial console feature and then how to access the serial consoles of your Nova servers.

Configuring serial console support

In previous releases of OpenStack, read-only access to the serial console of your servers was available through the os-getConsoleOutput server action (exposed via nova console-log on the command line). Most cloud-specific Linux images are configured with a kernel command line that includes something like console=tty0 console=ttyS0,115200n81, which ensures that kernel output and other messages are available on the serial console. This is a useful mechanism for diagnosing problems in the event that you do not have network access to a server.
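
On the command line, the old and new mechanisms look like this (get-serial-console is the Juno-era novaclient subcommand; the server name is invented):

$ nova console-log myserver
$ nova get-serial-console myserver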

[read more]

Cloud-init and the case of the changing hostname

Setting the stage

I ran into a problem earlier this week deploying RDO Icehouse under RHEL 6. My target systems were a set of libvirt guests deployed from the RHEL 6 KVM guest image, which includes cloud-init in order to support automatic configuration in cloud environments. I take advantage of this when using libvirt by attaching a configuration drive so that I can pass in ssh keys and a user-data script.
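
For reference, building such a drive by hand looks something like this (a sketch using cloud-init's NoCloud layout; the post may use the OpenStack config-drive format instead):

$ genisoimage -output cidata.iso -volid cidata \
>     -joliet -rock user-data meta-data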

[read more]