I was the presenter for this morning’s RDO hangout, where I ran through a simple demonstration of setting up a multinode OpenStack deployment using packstack.
The slides are online here.
Here’s the video (also available on the event page):
This is just here as a reminder for me:
An OVS interface has a variety of attributes associated with it, including an external_ids field that can be used to associate resources outside of Open vSwitch with the interface. You can view this field with the following command:
$ ovs-vsctl --columns=name,external-ids list Interface
On my system, with a single virtual instance running, that looks like this:
# ovs-vsctl --columns=name,external-ids list Interface
.
.
.
name : "qvo519d7cc4-75"
external_ids : {attached-mac="fa:16:3e:f7:75:b0", iface-id="519d7cc4-7593-4944-af7b-4056436f2d66", iface-status=active, vm-uuid="0330b084-03db-4d42-a231-2cd6ad89515b"}
.
.
.
Note the information contained here: attached-mac records the instance's MAC address, iface-id is the UUID of the corresponding Neutron port, and vm-uuid identifies the Nova instance to which this interface is attached.
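You can also go the other direction. As a sketch (not something from the output above), ovs-vsctl find accepts a key from the external_ids map as a search condition, so you can locate the interface attached to a particular Nova instance using the vm-uuid value shown earlier:

$ ovs-vsctl --columns=name find Interface external_ids:vm-uuid=0330b084-03db-4d42-a231-2cd6ad89515b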
I work with several different OpenStack installations. I usually work on the command line, sourcing in an appropriate stackrc with credentials as necessary, but occasionally I want to use the dashboard for something.
For all of the deployments with which I work, the keystone endpoint is on the same host as the dashboard. So rather than trying to remember which dashboard url I want for the environment I’m currently using on the command line, I put together this shell script:
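That script isn't reproduced here, but a minimal sketch of the idea might look something like the following. It assumes OS_AUTH_URL has already been set by the sourced stackrc and that the dashboard is served from /dashboard on the same host; both are assumptions for illustration, not details from the original post.

#!/bin/sh
# Sketch: derive the dashboard URL from the keystone endpoint in OS_AUTH_URL.
# Strip the scheme, then strip any port and path, leaving just the hostname.
host=$(echo "$OS_AUTH_URL" | sed -e 's|^[^/]*//||' -e 's|[:/].*$||')
# Open the dashboard on that host in the default browser.
xdg-open "http://$host/dashboard"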
When you boot a virtual instance under OpenStack, your instance has access to certain instance metadata via the Nova metadata service, which is canonically available at http://169.254.169.254/.
In an environment running Neutron, a request from your instance must traverse a number of steps:
When there are problems accessing the metadata, it can be helpful to verify that the metadata service itself is configured correctly and returning meaningful information.
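One simple check, using the standard metadata paths rather than anything specific to this post, is to query the service directly from inside a running instance:

$ curl http://169.254.169.254/latest/meta-data/instance-id
$ curl http://169.254.169.254/openstack/latest/meta_data.json

If these requests hang or return errors, the problem may lie anywhere along the path described above; if they return sensible values, the metadata service itself is probably healthy.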
This Wednesday, January 15, at 14:00 UTC (that's 9AM US/Eastern, or date -d "14:00 UTC" in your local timezone) I will be helping out with the RDO bug triage day. We'll be trying to validate all the untriaged bugs opened against RDO.

Feel free to drop by on #rdo and help out or ask questions.
I’ve put together a few tools to help gather information about your Neutron and network configuration and visualize it in different ways. All of these tools are available as part of my neutron-diag repository on GitHub.
In this post I’m going to look at a tool that will help you visualize the connectivity of network devices on your system.
There are a lot of devices involved in your Neutron network configuration. Information originating in one of your instances has to traverse at least seven network devices before seeing the light of day. Understanding how everything connects is critical if you're trying to debug problems in your environment.
Heat is a template-based orchestration mechanism for use with OpenStack. With Heat, you can deploy collections of resources – networks, servers, storage, and more – all from a single, parameterized template.
In this article I will introduce Heat templates and the heat command line client.
Because Heat began life as an analog of AWS CloudFormation, it supports the template formats used by the CloudFormation (CFN) tools. It also supports its own native template format, called HOT (“Heat Orchestration Templates”). In this article I will be using the HOT template syntax, which is fully specified on the OpenStack website.
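To give a sense of the syntax before digging in, here is a minimal HOT template that boots a single Nova server. The parameter name and the m1.small flavor are illustrative placeholders, not values taken from this article:

heat_template_version: 2013-05-23

description: Minimal example that boots a single server.

parameters:
  image:
    type: string
    description: Name or ID of the image to boot

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small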
I just recently learned about the signalfd(2) system call, which was introduced to the Linux kernel back in 2007:
signalfd() creates a file descriptor that can be used to accept signals targeted at the caller. This provides an alternative to the use of a signal handler or sigwaitinfo(2), and has the advantage that the file descriptor may be monitored by select(2), poll(2), and epoll(7).
The traditional asynchronous delivery mechanism can be tricky to get right, whereas this provides a convenient fd interface that integrates nicely with your existing event-based code.
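Here is a small C sketch of the pattern described in the man page excerpt above (my own minimal example, not code taken from the kernel documentation): block the signals you care about, create the signalfd, and then simply read from it.

#include <sys/signalfd.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    sigset_t mask;
    struct signalfd_siginfo si;
    int sfd;

    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigaddset(&mask, SIGTERM);

    /* Block normal delivery so the signals are only available via the fd. */
    if (sigprocmask(SIG_BLOCK, &mask, NULL) == -1) {
        perror("sigprocmask");
        exit(1);
    }

    sfd = signalfd(-1, &mask, 0);
    if (sfd == -1) {
        perror("signalfd");
        exit(1);
    }

    /* This read blocks until one of the masked signals arrives; in real code
       you would hand sfd to select(2), poll(2), or epoll(7) instead. */
    if (read(sfd, &si, sizeof si) == sizeof si)
        printf("got signal %u\n", si.ssi_signo);

    close(sfd);
    return 0;
}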
In this post I’m going to step through an example web chat system implemented in Python (with Bottle and gevent) that uses long polling to implement a simple publish/subscribe mechanism for efficiently updating connected clients.
My pubsub_example repository on GitHub has a complete project that implements the ideas discussed in this article. This project can be deployed directly on OpenShift if you want to try things out on your own. You can also try it out online at http://pubsub.example.oddbit.com/.
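The repository is the authoritative version; as a rough illustration of the long-polling idea only (the route names and in-memory queues below are my own invention, not the pubsub_example code, which uses a real message bus), a Bottle application running under gevent can simply block in a request handler until a message shows up:

from gevent import monkey; monkey.patch_all()
from gevent.queue import Queue
import bottle

subscribers = []

@bottle.post('/publish')
def publish():
    # Deliver the posted JSON message to every connected subscriber.
    msg = bottle.request.json
    for q in subscribers:
        q.put(msg)
    return {'status': 'ok'}

@bottle.get('/poll')
def poll():
    # Each long-poll request gets its own queue; the request blocks here
    # until a message arrives, then returns it to the client.
    q = Queue()
    subscribers.append(q)
    try:
        return q.get()
    finally:
        subscribers.remove(q)

if __name__ == '__main__':
    bottle.run(host='0.0.0.0', port=8080, server='gevent')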
In this article, a followup to my previous post regarding long-poll servers and Python, we investigate the code changes that were necessary to make the code work when deployed on OpenShift.
In the previous post, we implemented IO polling to watch for client disconnects at the same time we were waiting for messages on a message bus:
# subsock is a ZeroMQ subscriber socket attached to the message bus; rfile is
# the file object of the client connection, registered so we notice disconnects.
poll = zmq.Poller()
poll.register(subsock, zmq.POLLIN)
poll.register(rfile, zmq.POLLIN)

events = dict(poll.poll())
.
.
.
If you were to try this at home, you would find that everything worked as described…but if you were to deploy the same code to OpenShift, you would find that the problem we were trying to solve (the server holding file descriptors open after a client disconnected) would still exist.