Posts for: #Tech

Heat-kubernetes Demo with Autoscaling

Next week is the Red Hat Summit in Boston, and I’ll be taking part in a Project Atomic presentation in which I will discuss various (well, two) options for deploying Atomic into an OpenStack environment, focusing on my heat-kubernetes templates.

As part of that presentation, I’ve put together a short demonstration video:

This shows off the autoscaling behavior available with recent versions of these templates (and also serves as a very brief introduction to working with Kubernetes).

[read more]

Teach git about GIT_SSL_CIPHER_LIST

Someone named hithard on StackOverflow was trying to clone a git repository via https, and was running into an odd error: “Cannot communicate securely with peer: no common encryption algorithm(s).” This was because the server (openhatch.org) was configured to use a cipher suite that was not supported by default in the underlying SSL library (which could be either OpenSSL or NSS, depending on how git was built).

Many applications allow the user to configure an explicit list of ciphers to consider when negotiating a secure connection. For example, curl has the CURLOPT_SSL_CIPHER_LIST option. This turns out to be especially relevant because git relies on libcurl for all of its http operations, which means all we need to do is (a) create a new configuration option for git, and then (b) pass that value through to libcurl.
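In released versions of git this shows up as the http.sslCipherList configuration option and the GIT_SSL_CIPHER_LIST environment variable. As a quick sketch of how you would use it (the repository URL is a placeholder and the cipher name is just an illustrative OpenSSL-style name):

# One-off override via the environment:
GIT_SSL_CIPHER_LIST='ECDHE-RSA-AES128-GCM-SHA256' \
    git clone https://example.com/repo.git

# Or persistently, via git configuration:
git config --global http.sslCipherList 'ECDHE-RSA-AES128-GCM-SHA256'

The exact cipher names you need depend on whether your git is linked against OpenSSL or NSS, since the two libraries use different naming conventions.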

[read more]

Suggestions for the Docker MAINTAINER directive

Because nobody asked for it, this is my opinion on the use of the MAINTAINER directive in your Dockerfiles.

The documentation says simply:

The MAINTAINER instruction allows you to set the Author field of the generated images.

Many people end up putting the name and email address of an actual person here. I think this is ultimately a bad idea, and does a disservice both to members of a project that produce Docker images and to people consuming those images.
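For context, whatever value you put in MAINTAINER is what shows up in the image's Author field, which anyone consuming the image can read back directly (the image name here is a placeholder):

docker inspect --format '{{ .Author }}' example/someimage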

[read more]

Using tools badly: time shifting git commits with Workinghours

This is a terrible hack. If you are easily offended by bad ideas implemented poorly, move along!

You are working on a wonderful open source project…but you are not supposed to be working on that project! You’re supposed to be doing your real work! Unfortunately, your extra-curricular activity is well documented in the git history of your project for all to see:

[Image: heatmap of the original commit history]

And now your boss knows why the TPS reports are late. You need workinghours, a terrible utility for doing awful things to your repository history. Workinghours will programmatically time-shift your git commits so that they appear to have happened within specified time intervals (for example, “between 7PM and midnight”).
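To be clear about what “time shifting” means here: commit timestamps are just metadata, and you can rewrite them by hand with git itself. The following is a manual sketch of the underlying idea rather than the workinghours command line, and the dates are made up:

# Rewrite the author and committer dates of the most recent commit.
# workinghours automates this sort of rewrite across many commits.
git filter-branch --env-filter '
    export GIT_AUTHOR_DATE="2015-05-01T19:30:00"
    export GIT_COMMITTER_DATE="2015-05-01T19:30:00"
' HEAD~1..HEAD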

[read more]

Booting cloud images with libvirt

Most major distributions now provide “cloud-enabled” images designed for use in cloud environments like OpenStack and AWS. These images are usually differentiated by (a) being relatively small, and (b) running cloud-init at boot to perform initial system configuration tasks using metadata provided by the cloud environment.

Because of their small size and support for automatic configuration (including such useful tasks as provisioning ssh keys), these images are attractive for use outside of a cloud environment. Unfortunately, when people first try to boot them they are met with frustration: first the image takes forever to boot while it tries (and fails) to contact a non-existent metadata service, and then, when it finally does boot, they are unable to log in because the images typically only support key-based login.
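The general cure is to hand cloud-init its configuration locally instead of over the network. One common technique (sketched here assuming genisoimage is available; the hostname and key path are placeholders) is to build a small “NoCloud” seed ISO and attach it to the instance:

# Build a tiny cloud-init "NoCloud" seed image; the volume label "cidata"
# is how cloud-init recognizes it at boot.
cat > meta-data <<EOF
instance-id: demo-001
local-hostname: demo
EOF

cat > user-data <<EOF
#cloud-config
ssh_authorized_keys:
  - $(cat ~/.ssh/id_rsa.pub)
EOF

genisoimage -output cidata.iso -volid cidata -joliet -rock user-data meta-data

Attach cidata.iso to the guest as a CD-ROM (for example via virt-install's --disk option) and cloud-init will configure the system from it rather than waiting on a metadata service.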

[read more]

Diagnosing problems with an OpenStack deployment

I recently had the chance to help a colleague debug some problems in his OpenStack installation. The environment was unique because it was booting virtualized aarch64 instances, which at the time did not have any PCI bus support…which in turn precluded things like graphic consoles (i.e., VNC or SPICE consoles) for the Nova instances.

This post began life as an email summarizing the various configuration changes we made on the systems to get things up and running. After writing it, I decided it presented an interesting summary of some common (and maybe not-so-common) issues, so I am posting it here in the hopes that other folks will find it interesting.

[read more]

Converting hexadecimal ip addresses to dotted quads with Bash

This is another post that is primarily for my own benefit for the next time I forget how to do this.

I wanted to read routing information directly from /proc/net/route using bash, because you never know what may or may not be available in the minimal environment of a Docker container (for example, the iproute package is not installed by default in the Fedora Docker images). The contents of /proc/net/route look something like:
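(The sample below is illustrative rather than copied from the post.) Each row is a whitespace-separated record whose Destination, Gateway, and Mask columns are hexadecimal 32-bit values printed in host byte order, so on x86 and most other common architectures the octets come out reversed. Converting one to a dotted quad is then just a matter of reading the byte pairs back to front:

# Illustrative /proc/net/route row (abbreviated):
#   Iface  Destination  Gateway   Flags ... Mask
#   eth0   0002A8C0     00000000  0001  ... 00FFFFFF

# Convert a reversed-byte-order hex address to a dotted quad using only
# bash and printf:
hex_to_quad () {
    local hex=$1
    printf '%d.%d.%d.%d\n' \
        "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}"
}

hex_to_quad 0002A8C0    # prints 192.168.2.0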

[read more]

Visualizing Pacemaker resource constraints

If a picture is worth a thousand words, then code that generates pictures from words is worth…uh, anyway, I wrote a script that produces dot output from Pacemaker start and colocation constraints:

https://github.com/larsks/pacemaker-tools/

You can pass this output to graphviz to create visualizations of your Pacemaker resource constraints.

The graph-constraints.py script in that repository consumes the output of cibadmin -Q and can produce output for either start constraints (-S, the default) or colocation constraints (-C).
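Putting those pieces together, a typical invocation might look like the following (this assumes the script reads the CIB on stdin; the output filenames are arbitrary):

# Dump the CIB, extract colocation constraints as a dot graph, then render
# the graph with graphviz.
cibadmin -Q | python graph-constraints.py -C > constraints.dot
dot -Tsvg -o constraints.svg constraints.dot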

[read more]

Stupid Pacemaker XML tricks

I’ve recently spent some time working with Pacemaker, and ended up with an interesting collection of XPath snippets that I am publishing here for your use and/or amusement.

Check if there are any inactive resources

pcs status xml |
  xmllint --xpath '//resource[@active="false"]' - >&/dev/null &&
  echo "There are inactive resources"

This selects any resource (//resource) in the output of pcs status xml that has the attribute active set to false. If there are no matches to this query, xmllint exits with an error code.
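A variation on the same pattern (my own, not from the post) prints the number of inactive resources instead of just signalling their presence via the exit code:

pcs status xml |
  xmllint --xpath 'count(//resource[@active="false"])' -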

[read more]

Unpacking Docker images with Undocker

In some ways, the most exciting thing about Docker isn’t the ability to start containers. That’s been around for a long time in various forms, such as LXC or OpenVZ. What Docker brought to the party was a convenient method of building and distributing the filesystems necessary for running containers. Suddenly, it was easy to build a containerized service and to share it with other people.

I was taking a closer look at the systemd-nspawn command, which it seems has been developing its own set of container-related superpowers recently, including a number of options for setting up the network environment of a container. Like Docker, systemd-nspawn needs a filesystem on which to operate, but unlike Docker, there is no convenient distribution mechanism and no ecosystem of existing images. In fact, the official documentation seems to assume that you’ll be building your own from scratch. Ain’t nobody got time for that…
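Undocker bridges that gap by unpacking the layers of an existing Docker image into a plain directory tree that systemd-nspawn can use. A rough sketch of the workflow (the exact undocker flags here are assumptions; check its README for the real interface):

# Export an image from Docker and unpack its layers into ./busybox
# (the -o output-directory flag is an assumption).
docker save busybox | undocker -o busybox busybox

# Boot the resulting directory tree as a container with systemd-nspawn.
sudo systemd-nspawn -D busybox /bin/sh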

[read more]