Systemd-nspawn for fun and…well, mostly for fun

systemd-nspawn has been called “chroot on steroids”, but you wouldn’t be far wrong to think of it as Docker with a slightly different target, either. It can be used to spawn containers on your host, and offers a range of options for configuring the containerized environment: private networking, bind mounts, capability controls, and a variety of other facilities that give you flexible container management.

There are many different ways in which it can be used. I’m going to focus on one that’s a bit of a corner use case that I find particularly interesting. In this article we’re going to explore how we can use systemd-nspawn to spawn lightweight containers for architectures other than that of our host system.
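
To give you a taste of where we’re headed, the basic recipe looks something like this (a sketch rather than the full article: it assumes qemu-user-static is installed and an ARM root filesystem has been unpacked under /srv/armv7-root):

# Register binfmt_misc handlers for foreign binaries (systemd-binfmt
# picks up the configuration shipped by qemu-user-static).
sudo systemctl restart systemd-binfmt

# Place a copy of the static emulator inside the container filesystem...
sudo cp /usr/bin/qemu-arm-static /srv/armv7-root/usr/bin/

# ...and spawn a shell in the foreign-architecture container.
sudo systemd-nspawn -D /srv/armv7-root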

[read more]

Installing pyspatialite on Fedora

If you should find yourself wanting to install pyspatialite on Fedora – perhaps because you want to use the Processing plugin for QGIS – you will first need to install the following dependencies:

  • gcc
  • python-devel
  • sqlite-devel
  • geos-devel
  • proj-devel
  • python-pip
  • redhat-rpm-config
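
On a current Fedora release you can grab all of those in one go with dnf (package names exactly as listed above):

sudo dnf install -y gcc python-devel sqlite-devel geos-devel proj-devel python-pip redhat-rpm-config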

After which you can install pyspatialite using pip by running:

CFLAGS=-I/usr/include pip install pyspatialite

At this point, you should be able to use the “Processing” plugin.

[read more]

Ansible 2.0: New OpenStack modules

This is the second in a loose sequence of articles looking at new features in Ansible 2.0. In the previous article I looked at the Docker connection driver. In this article, I would like to provide an overview of the new-and-much-improved suite of modules for interacting with an OpenStack environment, and provide a few examples of their use.
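
For example, booting an instance with the new os_server module can be a one-liner; here’s a hypothetical ad-hoc invocation (it assumes a clouds.yaml profile named mycloud and suitably named image, flavor, and network resources):

ansible localhost -m os_server -a "cloud=mycloud name=demo image=cirros flavor=m1.tiny network=private"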

In versions of Ansible prior to 2.0, there was only a small collection of OpenStack modules, providing the bare minimum necessary to boot a Nova instance:

[read more]

Automatic git cache

This post is in response to a comment someone made on irc earlier today:

[I] would really like a git lookaside cache which operated on an upstream repo, but pulled objects locally when they’re available

In this post I present a proof-of-concept solution to this request. Please note that this isn’t something that has actually been used or tested anywhere!

If you access a git repository via ssh, it’s easy to provide a wrapper for git operations via the command= option in an authorized_keys file. We can take advantage of this to update a local “cache” repository prior to responding to a clone/pull/etc. operation.
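
Here’s the shape of the thing (a sketch under some assumptions: a cache repository at /srv/cache/repo.git, and a hypothetical wrapper script named git-cache-shell). The authorized_keys entry would look something like:

command="/usr/local/bin/git-cache-shell",no-port-forwarding ssh-rsa AAAA... user@example.com

And the wrapper itself might look like:

#!/bin/sh
# Hypothetical wrapper: refresh the local cache from upstream before
# handing the client's original git request off to git-shell.
git -C /srv/cache/repo.git fetch --all --quiet
exec git-shell -c "$SSH_ORIGINAL_COMMAND"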

[read more]

Stupid Ansible Tricks: Running a role from the command line

When writing Ansible roles I occasionally want a way to just run a role from the command line, without having to muck about with a playbook. I’ve seen similar requests on the mailing lists and on irc.

I’ve thrown together a quick wrapper, called ansible-role, that will allow you (and me!) to do exactly that. The --help output looks like this:

usage: ansible-role [-h] [--verbose] [--gather] [--no-gather]
                    [--extra-vars EXTRA_VARS] [-i INVENTORY] [--hosts HOSTS]
                    [--sudo] [--become] [--user USER]
                    role

positional arguments:
  role

optional arguments:
  -h, --help            show this help message and exit
  --verbose, -v
  --gather, -g
  --no-gather, -G
  --extra-vars EXTRA_VARS, -e EXTRA_VARS

Inventory:
  -i INVENTORY, --inventory INVENTORY
  --hosts HOSTS, -H HOSTS

Identity:
  --sudo, -s
  --become, -b
  --user USER, -u USER
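
So, for example, a quick run of a role against a couple of hosts might look like this (hypothetical host and role names):

ansible-role --hosts web0,web1 -e myvar=somevalue testrole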

Example

If you have a role roles/testrole that contains the following in tasks/main.yml:

[read more]

Bootstrapping Ansible on Fedora 23

If you’ve tried running Ansible against a Fedora 23 system, you may have run into the following problem:

fatal: [myserver]: FAILED! => {"changed": false, "failed": true,
"msg": "/bin/sh: /usr/bin/python: No such file or directory\r\n",
"parsed": false}

Fedora has recently made the switch to only including Python 3 on the base system (at least for the cloud variant), while Ansible still requires Python 2. With Fedora 23, Python 3 is available as /usr/bin/python3, and /usr/bin/python is only available if you have installed the Python 2 interpreter.
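
Until the target has Python 2 available, you can bootstrap it with Ansible’s raw module, which doesn’t require a Python interpreter on the remote host. For example (package names as of Fedora 23; python-dnf provides the bindings used by Ansible’s dnf module):

ansible myserver -m raw -a "dnf install -y python python-dnf"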

[read more]

Ansible 2.0: The Docker connection driver

As the release of Ansible 2.0 draws closer, I’d like to take a look at some of the new features that are coming down the pipe. In this post, we’ll look at the docker connection driver.

A “connection driver” is the mechanism by which Ansible connects to your target hosts. These days it uses ssh by default (which relies on the OpenSSH command-line client for connectivity), and it also offers the Paramiko library as an alternative ssh implementation (this was in fact the default driver in earlier versions of Ansible). Alternative drivers offered by recent versions of Ansible include the winrm driver, for accessing Windows hosts; the fireball driver, a (deprecated) driver that used 0mq for communication; and jail, a driver for connecting to FreeBSD jails.
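
The docker driver runs modules directly inside a running container (via docker exec) instead of connecting over ssh. A minimal smoke test might look like this (mycontainer is a hypothetical running container; note the trailing comma, which tells Ansible this is an inline inventory list):

ansible all -i 'mycontainer,' -c docker -m ping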

[read more]

Running NTP in a Container

Someone asked on IRC about running ntpd in a container on Atomic, so I’ve put together a small example. We’ll start with a very simple Dockerfile:

FROM alpine
RUN apk update
RUN apk add openntpd
ENTRYPOINT ["ntpd"]

I’m using the alpine image as my starting point because it’s very small, which makes this whole process go a little faster. I’m installing the openntpd package, which provides the ntpd binary.

By setting an ENTRYPOINT here, the ntpd binary will be started by default, and any arguments passed to docker run after the image name will be passed to ntpd.
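
Putting that together, building and running the image might look like this (the image tag and ntpd flags here are my own choices: -d keeps ntpd in the foreground, -s sets the clock immediately at startup, and setting the clock from inside a container requires extra privileges):

docker build -t ntpd-image .
docker run --rm --cap-add SYS_TIME ntpd-image -d -s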

[read more]

Migrating Cinder volumes between OpenStack environments using shared NFS storage

Many of the upgrade guides for OpenStack focus on in-place upgrades to your OpenStack environment. Some organizations may opt for a less risky (but more hardware intensive) option of setting up a parallel environment, and then migrating data into the new environment. In this article, we look at how to use Cinder backups with a shared NFS volume to facilitate the migration of Cinder volumes between two different OpenStack environments.
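
The rough flow, using the cinder CLI (the IDs below are placeholders, and both clouds are assumed to point their backup driver at the same NFS export):

# In the source cloud: write a backup of the volume to the shared store.
cinder backup-create --name migrate-me <volume-id>

# Export the backup's metadata record...
cinder backup-export <backup-id>

# ...then, in the destination cloud, import that record and restore it.
cinder backup-import <backup-service> <backup-url>
cinder backup-restore <backup-id>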

[read more]

Provider external networks (in an appropriate amount of detail)

In Quantum in Too Much Detail, I discussed the architecture of a Neutron deployment in detail. Since that article was published, Neutron gained the ability to handle multiple external networks with a single L3 agent. I wrote about that back in 2014, but that article covered the configuration side in much more detail than the underlying network architecture. This post addresses the architecture side.

The players

This document describes the architecture that results from a particular OpenStack configuration, specifically:

[read more]