I ran a Google Hangout this morning on Deploying with Heat. You
can find the slides for the presentation online here, and the
Heat templates (as well as the slide sources) are available on
GitHub.
If you have any questions about the presentation, please feel free to
ping me on irc (larsks).
I spent some time today learning about Heat autoscaling groups,
which are incredibly nifty but a little opaque from the Heat command
line, since commands such as heat resource-list don’t recurse into
nested stacks. It is possible to introspect these resources (you can
pass the physical resource id of a nested stack to heat resource-list, for example)…
…but I really like visualizing things, so I wrote a quick hack
called dotstack that will generate dot-language output from a
Heat stack. You can process this with Graphviz to produce a graph
in which nodes are automatically colorized by resource type.
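For example, assuming dotstack takes a stack name on the command line
(an assumption on my part; check the repository for the actual CLI),
rendering a stack might look like this, where the dot invocation is
standard Graphviz:

    # Generate dot output from a stack and render it as SVG.
    dotstack mystack > mystack.dot
    dot -Tsvg -o mystack.svg mystack.dot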
While writing that article, I encountered a number of bugs in the
Docker plugin and elsewhere, and I've submitted patches for most of
them.
I have been looking at both Docker and OpenStack recently. In my last
post I talked a little about the Docker driver for Nova; in
this post I’ll be taking an in-depth look at the Docker plugin for
Heat, which has been available since the Icehouse release but is
surprisingly under-documented.
The release announcement on the Docker blog includes an
example Heat template, but it is unfortunately grossly inaccurate and
has led many people astray. In particular, for WaitCondition
resources to operate correctly in Heat, you will need to make sure
that you have:
- Created the necessary Heat domain and administrative user in
  Keystone,
- Configured appropriate values in heat.conf for
  stack_user_domain, stack_domain_admin, and
  stack_domain_admin_password,
- Configured an appropriate value in heat.conf for
  heat_waitcondition_server_url (see the sketch after this list). On
  a single-system install this will often point at 127.0.0.1 by
  default, which, hopefully for obvious reasons, won't be of any use
  to your Nova servers.
- Enabled the heat-api-cfn service, and
- Configured your firewall to permit access to the CFN service (which
  runs on port 8000).
Steve Hardy has a blog post on stack domain users that goes into
detail on configuring authentication for Heat and Keystone.
I’ve been playing with Docker a bit recently, and decided to take
a look at the nova-docker driver for OpenStack.
The nova-docker driver lets Nova, the OpenStack Compute service,
spawn Docker containers instead of hypervisor-based servers. For
certain workloads, this leads to better resource utilization than you
would get with a hypervisor-based solution, while at the same time
giving you better support for multi-tenancy and flexible networking
than you get with Docker by itself.
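Enabling the driver is mostly a matter of pointing Nova at it. A
minimal sketch, assuming the out-of-tree nova-docker package is
installed (the driver class path and the service name vary by release
and distribution):

    # Tell Nova to use the Docker driver instead of libvirt.
    crudini --set /etc/nova/nova.conf DEFAULT compute_driver \
        novadocker.virt.docker.DockerDriver

    # Restart the compute service to pick up the change.
    service openstack-nova-compute restart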
Until recently I had a bcache-based setup on my laptop, but when
forced by circumstance to reinstall everything I spent some time
looking for alternatives that were less disruptive to configure on an
existing system.
I came across Richard Jones’ article discussing the recent work to
integrate dm-cache into LVM. Unlike bcache and unlike using
dm-cache directly, the integration with LVM makes it easy to
associate devices with an existing logical volume.
I have put together a small tool called lvcache that simplifies the
process of creating a cache pool and attaching it to an existing
logical volume.
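For a sense of what lvcache automates, the manual procedure with the
new LVM integration looks roughly like this (the volume group,
volume, and device names are made up for illustration):

    # Create a cache pool on the fast device (e.g., an SSD)...
    lvcreate --type cache-pool -L 10G -n home_cache vg0 /dev/sdb
    # ...then attach it to the existing logical volume.
    lvconvert --type cache --cachepool vg0/home_cache vg0/home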
This article discusses four ways to make a Docker container appear on
a local network. These are not suggested as practical solutions, but
are meant to illustrate some of the underlying network technology
available in Linux.
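As a taste of the sort of thing the article covers, here is a rough
sketch of one such technique: moving one end of a veth pair into the
container's network namespace by hand. The container name, bridge
name, interface names, and address are all made up; this assumes a
running container named web and a host bridge br0 attached to the
local network:

    # Find the container's init process and expose its network
    # namespace to the ip tool.
    pid=$(docker inspect -f '{{.State.Pid}}' web)
    mkdir -p /var/run/netns
    ln -sf /proc/$pid/ns/net /var/run/netns/$pid

    # Create a veth pair; leave one end on the host bridge and move
    # the other end into the container.
    ip link add web-ext type veth peer name web-int
    brctl addif br0 web-ext
    ip link set web-ext up
    ip link set web-int netns $pid

    # Configure the container end with an address on the local network.
    ip netns exec $pid ip link set web-int up
    ip netns exec $pid ip addr add 192.168.1.50/24 dev web-int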
For a small project I’m working on I needed to attach a few buttons to
a Raspberry Pi and have some code execute in response to the
button presses.
Normally I would reach for Python for a simple project like this,
but constraints of the project made it necessary to implement
something in C with minimal dependencies. I didn’t want to write
something that was tied closely to my project…
…so I ended up writing gpio-watch, a simple tool for connecting
shell scripts (or any other executable) to GPIO events. There are a
few ways to interact with GPIO on the Raspberry Pi. For the fastest
possible performance, you will need to interact directly with the
underlying hardware using something like direct register
access. Since I was only responding to button presses, I opted
to take advantage of the GPIO sysfs interface, which exposes
the GPIO pins via the filesystem.
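The sysfs interface is simple enough to drive from a shell. For
example, to configure a pin for input and watch for edges (pin 4 here
is arbitrary):

    # Export the pin so it shows up under /sys/class/gpio...
    echo 4 > /sys/class/gpio/export
    # ...configure it as an input that interrupts on both edges...
    echo in > /sys/class/gpio/gpio4/direction
    echo both > /sys/class/gpio/gpio4/edge
    # ...and read its current value.
    cat /sys/class/gpio/gpio4/value

Edge events are delivered to anything poll()ing the value file, which
is essentially the mechanism gpio-watch builds on before handing off
to your script.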