I work for an organization that follows the common model of assigning people systematically generated user ids. Like most technically inclined employees of this organization, I have local accounts on my workstation that don’t bear any relation to the generated account ids. For the most part this isn’t a problem, except that our organization uses Kerberos to authenticate access to a variety of resources (such as the mailserver and a variety of web applications).
OpenStack Networking without DHCP
In an OpenStack environment, cloud-init generally fetches information from the metadata service provided by Nova. It also has support for reading this information from a configuration drive, which under OpenStack means a virtual CD-ROM device attached to your instance containing the same information that would normally be available via the metadata service.
It is possible to generate your network configuration from this configuration drive, rather than relying on the DHCP server provided by your OpenStack environment. In order to do this you will need to make the following changes to your Nova configuration:
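The exact changes depend on your Nova release, but they generally come down to forcing a config drive on every instance and having Nova inject static network configuration into it. A sketch of the idea (the option names below are an assumption based on Juno/Kilo-era Nova and are worth double-checking against your release):

```sh
# crudini is just a convenient way to edit ini-style files; editing
# /etc/nova/nova.conf by hand works equally well.  These option names
# reflect Juno/Kilo-era Nova and should be verified against your release.

# Always attach a config drive to new instances
# (some older releases expect the value "always" instead of "true"):
crudini --set /etc/nova/nova.conf DEFAULT force_config_drive true

# Have Nova inject static network configuration, so the guest can be
# configured without DHCP:
crudini --set /etc/nova/nova.conf DEFAULT flat_injected true

# Restart the compute service to pick up the changes (the service name
# varies by distribution):
systemctl restart openstack-nova-compute
```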
Heat-kubernetes Demo with Autoscaling
Next week is the Red Hat Summit in Boston, and I’ll be taking part in a Project Atomic presentation in which I will discuss various (well, two) options for deploying Atomic into an OpenStack environment, focusing on my heat-kubernetes templates.
As part of that presentation, I’ve put together a short demonstration video:
This shows off the autoscaling behavior available with recent versions of these templates (and also serves as a very brief introduction to working with Kubernetes).
Teach git about GIT_SSL_CIPHER_LIST
Someone named hithard on StackOverflow was trying to clone a git repository via https, and was running into an odd error: “Cannot communicate securely with peer: no common encryption algorithm(s).” This was because the server (openhatch.org) was configured to use a cipher suite that was not supported by default in the underlying SSL library (which could be either OpenSSL or NSS, depending on how git was built).
Many applications allow the user to configure an explicit list of ciphers to consider when negotiating a secure connection. For example, curl has the CURLOPT_SSL_CIPHER_LIST option. This turns out to be especially relevant because git relies on libcurl for all of its http operations, which means all we need to do is (a) create a new configuration option for git, and then (b) pass that value through to libcurl.
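Modern git exposes exactly this, as the http.sslCipherList configuration option and the GIT_SSL_CIPHER_LIST environment variable. A quick sketch of using it; the cipher name and repository URL here are only examples:

```sh
# Offer an explicit cipher list when negotiating the TLS connection.
# (The cipher-string syntax depends on whether git was built against
# OpenSSL or NSS; an OpenSSL-style name is shown here.)
GIT_SSL_CIPHER_LIST=ECDHE-RSA-AES128-GCM-SHA256 \
    git clone https://example.com/some/repo.git

# Or record it in git's configuration so every https operation uses it:
git config --global http.sslCipherList ECDHE-RSA-AES128-GCM-SHA256
```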
Suggestions for the Docker MAINTAINER directive
Because nobody asked for it, this is my opinion on the use of the MAINTAINER directive in your Dockerfiles.
The documentation says simply:
The MAINTAINER instruction allows you to set the Author field of the generated images.
Many people end up putting the name and email address of an actual person here. I think this is ultimately a bad idea: it does a disservice both to the members of a project producing Docker images and to the people consuming those images.
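For example (a hypothetical sketch; the project name and address are made up), attributing the image to the project keeps the Author field useful even as individual maintainers come and go:

```sh
# Hypothetical example: attribute the image to the project, not a person.
cat > Dockerfile <<'EOF'
FROM fedora:21
MAINTAINER The Widget Project <widget-dev@lists.example.com>
CMD ["/bin/sh"]
EOF

docker build -t widget .

# The value shows up in the image's Author field:
docker inspect --format '{{ .Author }}' widget
```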
Using tools badly: time shifting git commits with Workinghours
This is a terrible hack. If you are easily offended by bad ideas implemented poorly, move along!
You are working on a wonderful open source project…but you are not supposed to be working on that project! You’re supposed to be doing your real work! Unfortunately, your extra-curricular activity is well documented in the git history of your project for all to see:
And now your boss knows why the TPS reports are late. You need workinghours, a terrible utility for doing awful things to your repository history. Workinghours will programmatically time-shift your git commits so that they appear to have happened within specified time intervals (for example, “between 7PM and midnight”).
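Workinghours' own interface isn't shown here; under the hood, though, time shifting commits is just a matter of rewriting GIT_AUTHOR_DATE and GIT_COMMITTER_DATE. A much cruder sketch of the same idea with plain git filter-branch, shifting every commit forward by a fixed six hours:

```sh
# WARNING: this rewrites history; only do it to commits you have not pushed.
# Shift every commit's author and committer dates forward by six hours
# (timezones are normalized to UTC to keep the sketch short).
git filter-branch -f --env-filter '
    offset=$((6 * 3600))
    a_ts=$(git show -s --format=%at "$GIT_COMMIT")
    c_ts=$(git show -s --format=%ct "$GIT_COMMIT")
    export GIT_AUTHOR_DATE="@$((a_ts + offset)) +0000"
    export GIT_COMMITTER_DATE="@$((c_ts + offset)) +0000"
'
```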
Booting cloud images with libvirt
Most major distributions now provide “cloud-enabled” images designed for use in cloud environments like OpenStack and AWS. These images are usually differentiated by (a) being relatively small, and (b) running cloud-init at boot to perform initial system configuration tasks using metadata provided by the cloud environment.
Because of their small size and support for automatic configuration (including such useful tasks as provisioning ssh keys), these images are attractive for use outside of a cloud environment. Unfortunately, when people first try to boot them they are met with frustration as first the image takes forever to boot as it tries to contact a non-existent metadata service, and then when it finally does boot they are unable to log in because the images typically only support key-based login.
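One common workaround is to feed cloud-init its metadata locally via the NoCloud datasource: put user-data and meta-data files on a small ISO labelled cidata and attach it to the VM. A minimal sketch, assuming genisoimage and virt-install are available and that fedora.qcow2 is one of these cloud images (all names here are examples):

```sh
# Minimal cloud-init "NoCloud" seed: a tiny ISO labelled "cidata"
# containing meta-data and user-data.
cat > meta-data <<'EOF'
instance-id: demo-001
local-hostname: demo
EOF

cat > user-data <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
EOF

genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

# Boot the cloud image with the seed ISO attached; cloud-init reads its
# configuration from the ISO instead of waiting on a metadata service.
virt-install --name demo --ram 1024 \
    --disk path=fedora.qcow2,format=qcow2 \
    --disk path=seed.iso,device=cdrom \
    --import --network default --graphics none
```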
Diagnosing problems with an OpenStack deployment

I recently had the chance to help a colleague debug some problems in his OpenStack installation. The environment was unique because it was booting virtualized aarch64 instances, which at the time did not have any PCI bus support…which in turn precluded things like graphic consoles (i.e., VNC or SPICE consoles) for the Nova instances.
This post began life as an email summarizing the various configuration changes we made on the systems to get things up and running. After writing it, I decided it presented an interesting summary of some common (and maybe not-so-common) issues, so I am posting it here in the hopes that other folks will find it interesting.
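One practical consequence of having no graphical console: the instance's console log (and, where it is configured, the serial console) becomes the main window into a misbehaving guest. For example, with the standard Nova CLI ("my-instance" is a placeholder name):

```sh
# Dump the boot log captured from the guest's serial console:
nova console-log my-instance

# If the serial console proxy is configured, an interactive serial
# console is also available:
nova get-serial-console my-instance
```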
Converting hexadecimal ip addresses to dotted quads with Bash
This is another post that is primarily for my own benefit for the next time I forget how to do this.
I wanted to read routing information directly from /proc/net/route using bash, because you never know what may or may not be available in the minimal environment of a Docker container (for example, the iproute package is not installed by default in the Fedora Docker images). The contents of /proc/net/route look something like the sample shown below.
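Here is a representative sample, along with a small bash sketch of the conversion. The addresses are stored as little-endian hexadecimal, so a value like 0101A8C0 is really 192.168.1.1:

```sh
# /proc/net/route typically looks something like this (some columns omitted):
#
# Iface   Destination     Gateway         Flags ... Mask
# eth0    00000000        0101A8C0        0003  ... 00000000
# eth0    0001A8C0        00000000        0001  ... 00FFFFFF

# Convert a little-endian hex address into a dotted quad using only
# bash built-ins (printf understands 0x-prefixed hex).
hex2quad() {
    local hex=$1
    printf '%d.%d.%d.%d\n' \
        "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}"
}

# Example: print the default gateway (destination 00000000).
while read -r iface dest gateway _; do
    if [ "$dest" = "00000000" ]; then
        hex2quad "$gateway"
    fi
done < /proc/net/route
```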
Visualizing Pacemaker resource constraints
If a picture is worth a thousand words, then code that generates pictures from words is worth…uh, anyway, I wrote a script that produces dot output from Pacemaker start and colocation constraints:
https://github.com/larsks/pacemaker-tools/
You can pass this output to graphviz to create visualizations of your Pacemaker resource constraints.
The graph-constraints.py script in that repository consumes the output of cibadmin -Q and can produce output for either start constraints (-S, the default) or colocation constraints (-C).
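Putting it together (a sketch that assumes graph-constraints.py reads the CIB XML on stdin and that graphviz is installed):

```sh
# Dump the current cluster configuration, extract the start constraints
# as a dot graph, and render it as a PNG with graphviz.
cibadmin -Q | ./graph-constraints.py -S | dot -Tpng -o start-constraints.png

# The same thing for colocation constraints:
cibadmin -Q | ./graph-constraints.py -C | dot -Tpng -o colocation-constraints.png
```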